
Latest publications from IET Computer Vision

Language guided 3D object detection in point clouds for MEP scenes
IF 1.7 | CAS Tier 4, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-12-12 | DOI: 10.1049/cvi2.12261
Junjie Li, Shengli Du, Jianfeng Liu, Weibiao Chen, Manfu Tang, Lei Zheng, Lianfa Wang, Chunle Ji, Xiao Yu, Wanli Yu

In recent years, contrastive language-image pre-training (CLIP) has gained popularity for processing 2D data. However, the application of cross-modal transferable learning to 3D data remains a relatively unexplored area. In addition, high-quality, labelled point cloud data for Mechanical, Electrical, and Plumbing (MEP) scenarios are in short supply. To address this issue, the authors introduce a novel object detection system that takes 3D point clouds, 2D camera images, and text descriptions as input, using image-text matching knowledge to guide dense detection models for 3D point clouds in MEP environments. Specifically, the authors propose a language-guided point cloud modelling (PCM) module that leverages the shared image weights inherent in the CLIP backbone to generate pertinent category information for the target, thereby improving 3D point cloud target detection. Extensive experiments show that the proposed point cloud detection system with the PCM module performs comparably with current state-of-the-art networks, with improvements of 5.64% and 2.9% on KITTI and SUN RGB-D, respectively. In addition, equally good detection results are obtained on the authors' proposed MEP scene dataset.
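
To make the language-guidance idea more concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation: it assumes class-name text embeddings have been pre-computed offline by a CLIP text encoder, and the hypothetical `LanguageGuidedHead` simply scores 3D proposal features against them by cosine similarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LanguageGuidedHead(nn.Module):
    """Scores 3D proposal features against frozen text embeddings of class names."""
    def __init__(self, point_feat_dim: int, text_embed_dim: int, class_text_embeds: torch.Tensor):
        super().__init__()
        # Project point-cloud proposal features into the text embedding space.
        self.proj = nn.Linear(point_feat_dim, text_embed_dim)
        # Frozen class-name embeddings, assumed to come from a CLIP text encoder.
        self.register_buffer("class_text_embeds", F.normalize(class_text_embeds, dim=-1))
        self.logit_scale = nn.Parameter(torch.tensor(10.0))

    def forward(self, proposal_feats: torch.Tensor) -> torch.Tensor:
        # proposal_feats: (num_proposals, point_feat_dim)
        x = F.normalize(self.proj(proposal_feats), dim=-1)
        # Cosine-similarity logits against each class description: (num_proposals, num_classes)
        return self.logit_scale * x @ self.class_text_embeds.t()

if __name__ == "__main__":
    # Toy usage with random tensors standing in for real CLIP text embeddings.
    head = LanguageGuidedHead(point_feat_dim=256, text_embed_dim=512,
                              class_text_embeds=torch.randn(5, 512))
    print(head(torch.randn(32, 256)).shape)  # torch.Size([32, 5])
```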

Deep network with double reuses and convolutional shortcuts
IF 1.7 | CAS Tier 4, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-12-09 | DOI: 10.1049/cvi2.12260
Qian Liu, Cunbao Wang

The authors design a novel convolutional network architecture, the deep network with double reuses and convolutional shortcuts, in which new compressed reuse units are presented. A compressed reuse unit combines the reused features from its first 3 × 3 convolutional layer with the features from its last 3 × 3 convolutional layer to produce new feature maps, simultaneously reuses the feature maps from all previous compressed reuse units to generate a shortcut via a 1 × 1 convolution, and then concatenates these new maps and this shortcut as the input to the next compressed reuse unit. The network uses the concatenated reused features from all compressed reuse units as the final features for classification. In this architecture, the inner- and outer-unit feature reuses and the convolutional shortcut compressed from the previous outer-unit feature reuses can alleviate the vanishing-gradient problem by strengthening forward feature propagation inside and outside the units, improve the effectiveness of features, and reduce computation cost. Experimental results on the CIFAR-10, CIFAR-100, ImageNet ILSVRC 2012, Pascal VOC2007 and MS COCO benchmark databases demonstrate the effectiveness of the authors' architecture for object recognition and detection compared with the state of the art.
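
As a rough illustration of a compressed reuse unit, here is a hypothetical PyTorch sketch; the exact channel bookkeeping and fusion in the paper may differ. The first and last 3 × 3 convolution outputs are combined inside the unit, and a 1 × 1 convolution compresses the concatenated outputs of all previous units into a shortcut.

```python
import torch
import torch.nn as nn

class CompressedReuseUnit(nn.Module):
    """Illustrative unit: inner-unit reuse of the first/last 3x3 conv features plus
    an outer-unit shortcut compressed from all previous units by a 1x1 conv."""
    def __init__(self, in_ch: int, prev_ch: int, growth: int):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv2d(in_ch, growth, 3, padding=1),
                                   nn.BatchNorm2d(growth), nn.ReLU(inplace=True))
        self.conv2 = nn.Sequential(nn.Conv2d(growth, growth, 3, padding=1),
                                   nn.BatchNorm2d(growth), nn.ReLU(inplace=True))
        self.shortcut = nn.Conv2d(prev_ch, growth, 1)  # compress previous units' outputs

    def forward(self, x, prev_outputs):
        a = self.conv1(x)                       # features from the first 3x3 layer
        b = self.conv2(a)                       # features from the last 3x3 layer
        new_maps = torch.cat([a, b], dim=1)     # inner-unit feature reuse
        sc = self.shortcut(torch.cat(prev_outputs, dim=1))  # outer-unit feature reuse
        return torch.cat([new_maps, sc], dim=1)  # input passed to the next unit

if __name__ == "__main__":
    stem = torch.randn(2, 64, 32, 32)
    unit = CompressedReuseUnit(in_ch=64, prev_ch=64, growth=32)
    print(unit(stem, [stem]).shape)  # torch.Size([2, 96, 32, 32])
```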

MBMF: Constructing memory banks of multi-scale features for anomaly detection
IF 1.7 | CAS Tier 4, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-12-01 | DOI: 10.1049/cvi2.12258
Yanfeng Sun, Haitao Wang, Yongli Hu, Huajie Jiang, Baocai Yin

In industrial manufacturing, how to accurately classify defective products and locate defects has always been a concern. Previous studies mainly measured similarity based on single-scale features extracted from samples. However, features at a single scale can hardly represent anomalies of different sizes and types. Therefore, the authors propose a set of memory banks of multi-scale features (MBMF) to enrich feature representation and to detect and locate various anomalies. To extract features at different scales, different aggregation functions are designed to produce feature maps at different granularities. The MBMF are constructed from the multi-scale features of normal samples. Meanwhile, to better adapt to the feature distribution of the training samples, the authors propose a new iterative updating method for the memory banks. Tested on the widely used and challenging MVTec AD dataset, the proposed MBMF achieves competitive image-level anomaly detection performance (image-level Area Under the Receiver Operating Characteristic curve, AUROC) and pixel-level anomaly segmentation performance (pixel-level AUROC). To further evaluate the generalisation of the proposed method, anomaly detection is also performed on the BeanTech AD dataset, a commonly used dataset in the field of anomaly detection, and the Fashion-MNIST dataset, a widely used dataset in the field of image classification. The experimental results also verify the effectiveness of the proposed method.
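
As a sketch of the memory-bank idea (not the authors' code), the snippet below builds one bank per feature scale from normal-sample features aggregated at different granularities with average pooling (the pooling choice and scale set are assumptions) and scores test features by the distance to their nearest stored neighbour.

```python
import torch
import torch.nn.functional as F

def aggregate(feature_map: torch.Tensor, kernel: int) -> torch.Tensor:
    """One possible aggregation function: local average pooling at a given granularity,
    flattened to per-location feature vectors of shape (N, C)."""
    pooled = F.avg_pool2d(feature_map, kernel, stride=1, padding=kernel // 2)
    b, c, h, w = pooled.shape
    return pooled.permute(0, 2, 3, 1).reshape(-1, c)

class MultiScaleMemoryBank:
    """One memory bank per scale; anomaly score = distance to nearest normal feature."""
    def __init__(self, kernels=(1, 3, 5)):
        self.kernels = kernels
        self.banks = {k: [] for k in kernels}

    def fit(self, normal_feature_map: torch.Tensor) -> None:
        for k in self.kernels:
            self.banks[k].append(aggregate(normal_feature_map, k))

    def score(self, test_feature_map: torch.Tensor) -> torch.Tensor:
        scores = []
        for k in self.kernels:
            bank = torch.cat(self.banks[k], dim=0)
            query = aggregate(test_feature_map, k)
            scores.append(torch.cdist(query, bank).min(dim=1).values)
        return torch.stack(scores).mean(dim=0)  # average anomaly score over scales

if __name__ == "__main__":
    mb = MultiScaleMemoryBank()
    mb.fit(torch.randn(4, 32, 14, 14))                # normal-sample features
    print(mb.score(torch.randn(1, 32, 14, 14)).shape)  # torch.Size([196])
```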

Point cloud semantic segmentation based on local feature fusion and multilayer attention network
IF 1.7 | CAS Tier 4, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-11-27 | DOI: 10.1049/cvi2.12255
Junjie Wen, Jie Ma, Yuehua Zhao, Tong Nie, Mengxuan Sun, Ziming Fan

Semantic segmentation of three-dimensional point clouds is vital in autonomous driving, computer vision, and augmented reality. However, current semantic segmentation methods do not effectively use the point cloud's local geometric features and contextual information, which are essential for improving segmentation accuracy. A semantic segmentation network that uses local feature fusion and a multilayer attention mechanism is proposed to address these challenges. Specifically, the authors designed a local feature fusion module to encode the geometric and feature information separately, which fully leverages the point cloud's feature perception and geometric structure representation. Furthermore, the authors designed a multilayer attention pooling module consisting of local attention pooling and cascade attention pooling to extract contextual information. Local attention pooling is used to learn local neighbourhood information, and cascade attention pooling captures contextual information from deeper local neighbourhoods. Finally, an enhanced feature representation of important information is obtained by aggregating the features from the two attention pooling methods. Extensive experiments on the large-scale point-cloud datasets Stanford Large-Scale 3D Indoor Spaces (S3DIS) and SemanticKITTI indicate that the authors' network shows clear advantages over existing representative methods in local geometric feature description and global contextual relationships.
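
A hypothetical sketch of a local feature fusion block with local attention pooling follows (the actual layer widths and the cascade pooling stage of the paper are not reproduced): neighbourhood geometry (relative offsets) and point features are encoded by separate MLPs, concatenated, and pooled over each neighbourhood with learnt attention weights.

```python
import torch
import torch.nn as nn

class LocalFeatureFusion(nn.Module):
    """Encodes neighbourhood geometry and point features separately, fuses them,
    and applies attentive pooling over the K neighbours of each point."""
    def __init__(self, feat_dim: int, out_dim: int):
        super().__init__()
        self.geo_mlp = nn.Sequential(nn.Linear(3, out_dim), nn.ReLU(inplace=True))
        self.feat_mlp = nn.Sequential(nn.Linear(feat_dim, out_dim), nn.ReLU(inplace=True))
        self.score = nn.Linear(2 * out_dim, 1)    # local attention score per neighbour
        self.out = nn.Linear(2 * out_dim, out_dim)

    def forward(self, rel_xyz: torch.Tensor, neigh_feats: torch.Tensor) -> torch.Tensor:
        # rel_xyz: (B, N, K, 3) neighbour offsets; neigh_feats: (B, N, K, feat_dim)
        g = self.geo_mlp(rel_xyz)
        f = self.feat_mlp(neigh_feats)
        fused = torch.cat([g, f], dim=-1)          # (B, N, K, 2*out_dim)
        attn = torch.softmax(self.score(fused), dim=2)
        pooled = (attn * fused).sum(dim=2)         # attention pooling over K neighbours
        return self.out(pooled)                    # (B, N, out_dim)

if __name__ == "__main__":
    block = LocalFeatureFusion(feat_dim=8, out_dim=32)
    out = block(torch.randn(2, 1024, 16, 3), torch.randn(2, 1024, 16, 8))
    print(out.shape)  # torch.Size([2, 1024, 32])
```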

Anti-occlusion person re-identification via body topology information restoration and similarity evaluation
IF 1.7 | CAS Tier 4, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-11-27 | DOI: 10.1049/cvi2.12256
Chunyun Meng, Ernest Domanaanmwi Ganaa, Bin Wu, Zhen Tan, Li Luan

In real-world scenarios, pedestrian images often suffer from occlusion, where certain body features become invisible, making it challenging for existing methods to accurately identify pedestrians with the same ID. Traditional approaches typically focus on matching only the visible body parts, which can lead to misalignment when the occlusion patterns vary. To address this issue and alleviate misalignment in occluded pedestrian images, the authors propose a novel framework called body topology information generation and matching. The framework consists of two main modules: the body topology information generation module and the body topology information matching module. The generation module employs an adaptive detection mechanism and a capsule generative adversarial network to restore a holistic pedestrian image while preserving the body topology information. The matching module leverages the restored holistic image to overcome spatial misalignment and uses cosine distance as the similarity measure for matching. By combining the two modules, the authors achieve consistency in the body topology information features of pedestrian images, from restoration through to retrieval. Extensive experiments are conducted on both holistic person re-identification datasets (Market-1501, DukeMTMC-ReID) and occluded person re-identification datasets (Occluded-DukeMTMC, Occluded-ReID). The results demonstrate the superior performance of the authors' proposed model, and visualisations of the generation and matching modules are provided to illustrate their effectiveness. Furthermore, an ablation study is conducted to validate the contributions of the proposed framework.
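
The restoration network itself is not sketched here, but the cosine-distance matching stage can be illustrated in a few lines (a simplified sketch, assuming the embeddings come from the restored holistic images):

```python
import torch
import torch.nn.functional as F

def cosine_rank(query_embeds: torch.Tensor, gallery_embeds: torch.Tensor) -> torch.Tensor:
    """Ranks gallery images for each query by cosine distance between embeddings."""
    q = F.normalize(query_embeds, dim=1)
    g = F.normalize(gallery_embeds, dim=1)
    dist = 1.0 - q @ g.t()           # (num_query, num_gallery) cosine distances
    return dist.argsort(dim=1)        # gallery indices sorted from best to worst match

if __name__ == "__main__":
    ranks = cosine_rank(torch.randn(4, 256), torch.randn(100, 256))
    print(ranks[:, :5])               # top-5 gallery matches per query
```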

Semi-supervised domain adaptation via subspace exploration
IF 1.7 | CAS Tier 4, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-11-27 | DOI: 10.1049/cvi2.12254
Zheng Han, Xiaobin Zhu, Chun Yang, Zhiyu Fang, Jingyan Qin, Xucheng Yin

Recent methods for learning latent representations in Domain Adaptation (DA) often entangle feature learning and latent-space exploration into a unified process. However, these methods can cause a false alignment problem and do not generalise well to aligning distributions with large discrepancies. In this study, the authors propose to explicitly explore a robust subspace for Semi-Supervised Domain Adaptation (SSDA). Concretely, to disentangle the intricate relationship between feature learning and subspace exploration, the authors iterate and optimise them in two steps: in the first step, they learn well-clustered latent representations by aggregating target features around the estimated class-wise prototypes; in the second step, they adaptively explore a subspace of an autoencoder for robust SSDA. In particular, a novel denoising strategy based on class-agnostic disturbance is adopted to improve the discriminative ability of the subspace. Extensive experiments on publicly available datasets verify the promising and competitive performance of the approach against state-of-the-art methods.
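
A toy sketch of the two-step idea follows; the loss forms and the Gaussian noise model are assumptions, not taken from the paper. Step one pulls target features towards estimated class-wise prototypes; step two trains a small autoencoder whose input is perturbed by a class-agnostic disturbance before encoding into the subspace.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def prototype_clustering_loss(feats, pseudo_labels, prototypes):
    """Step 1: aggregate target features around their estimated class-wise prototypes."""
    return F.mse_loss(feats, prototypes[pseudo_labels])

class DenoisingSubspaceAE(nn.Module):
    """Step 2: a linear autoencoder whose bottleneck acts as the explored subspace."""
    def __init__(self, dim: int, sub_dim: int, sigma: float = 0.1):
        super().__init__()
        self.enc = nn.Linear(dim, sub_dim)
        self.dec = nn.Linear(sub_dim, dim)
        self.sigma = sigma

    def forward(self, x):
        noisy = x + self.sigma * torch.randn_like(x)  # class-agnostic disturbance
        z = self.enc(noisy)
        return self.dec(z), z

if __name__ == "__main__":
    feats = torch.randn(16, 128)
    protos = torch.randn(10, 128)
    labels = torch.randint(0, 10, (16,))
    ae = DenoisingSubspaceAE(128, 32)
    recon, z = ae(feats)
    loss = prototype_clustering_loss(feats, labels, protos) + F.mse_loss(recon, feats)
    print(float(loss))
```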

A Spatio-Temporal Enhanced Graph-Transformer AutoEncoder embedded pose for anomaly detection
IF 1.7 | CAS Tier 4, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-11-23 | DOI: 10.1049/cvi2.12257
Honglei Zhu, Pengjuan Wei, Zhigang Xu

Due to the robustness of skeleton data to human scale, illumination changes, dynamic camera views, and complex backgrounds, great progress has been made in skeleton-based video anomaly detection in recent years. The spatio-temporal graph convolutional network has proven effective in modelling the spatio-temporal dependencies of non-Euclidean data such as human skeleton graphs, and autoencoders based on this basic unit are widely used to model sequence features. However, due to the limitations of the convolution kernel, such models cannot capture the correlation between non-adjacent joints and struggle with long-term sequences, resulting in an insufficient understanding of behaviour. To address this issue, this paper applies the Transformer to the human skeleton and proposes the Spatio-Temporal Enhanced Graph-Transformer AutoEncoder (STEGT-AE) to improve modelling capability. In addition, a multi-memory model with skip connections is employed to provide different levels of coding features, thereby enhancing the ability of the model to distinguish similar heterogeneous behaviours. Furthermore, STEGT-AE has a single-encoder, double-decoder architecture, which can improve detection performance by combining reconstruction and prediction errors. The experimental results show that the performance of STEGT-AE is significantly better than that of other advanced algorithms on four baseline datasets.
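
The snippet below is a much-reduced sketch of the single-encoder / double-decoder idea: it uses a plain Transformer encoder over flattened 2D poses and omits the graph convolutions and multi-memory modules, with illustrative dimensions. One head reconstructs the observed sequence, the other predicts the next pose, and the anomaly score combines both errors.

```python
import torch
import torch.nn as nn

class PoseTransformerAE(nn.Module):
    """One encoder with a reconstruction head and a prediction head."""
    def __init__(self, n_joints: int = 17, d_model: int = 64, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        self.embed = nn.Linear(n_joints * 2, d_model)          # (x, y) per joint
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.recon_head = nn.Linear(d_model, n_joints * 2)
        self.pred_head = nn.Linear(d_model, n_joints * 2)

    def forward(self, poses):                                   # poses: (B, T, n_joints*2)
        z = self.encoder(self.embed(poses))
        return self.recon_head(z), self.pred_head(z)

def anomaly_score(model, poses, next_pose, alpha=0.5):
    """Combines reconstruction error and next-pose prediction error per sequence."""
    recon, pred = model(poses)
    rec_err = (recon - poses).pow(2).mean(dim=(1, 2))
    pred_err = (pred[:, -1] - next_pose).pow(2).mean(dim=1)
    return alpha * rec_err + (1 - alpha) * pred_err

if __name__ == "__main__":
    model = PoseTransformerAE()
    scores = anomaly_score(model, torch.randn(8, 12, 34), torch.randn(8, 34))
    print(scores.shape)  # torch.Size([8])
```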

A Decoder Structure Guided CNN-Transformer Network for face super-resolution
IF 1.7 | CAS Tier 4, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-11-22 | DOI: 10.1049/cvi2.12251
Rui Dou, Jiawen Li, Xujie Wan, Heyou Chang, Hao Zheng, Guangwei Gao

Recent advances in deep convolutional neural networks have shown improved performance in face super-resolution through joint training with other tasks such as face analysis and landmark prediction. However, these methods have certain limitations. One major limitation is the requirement for manually annotated information on the dataset for multi-task joint learning; this additional annotation process increases the computational cost of the network model. Additionally, since prior information is often estimated from low-quality faces, the obtained guidance information tends to be inaccurate. To address these challenges, a novel Decoder Structure Guided CNN-Transformer Network (DCTNet) is introduced, which utilises the newly proposed Global-Local Feature Extraction Unit (GLFEU) for effective embedding. Specifically, the proposed GLFEU is composed of an attention branch and a Transformer branch, to restore global facial structure and local texture details simultaneously. Additionally, a Multi-Stage Feature Fusion Module is incorporated to fuse features from different network stages, further improving the quality of the restored face images. Compared with previous methods, DCTNet improves Peak Signal-to-Noise Ratio by 0.23 and 0.19 dB on the CelebA and Helen datasets, respectively. Experimental results demonstrate that the designed DCTNet offers a simple yet powerful solution for recovering detailed facial structures from low-quality images.
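
A stripped-down, hypothetical version of such a global-local unit is shown below (the channel counts, the exact branch designs and the fusion scheme are assumptions, as is the omitted multi-stage fusion): a small convolutional branch captures local texture while a self-attention branch captures global structure, and the two are fused by a 1 × 1 convolution.

```python
import torch
import torch.nn as nn

class GlobalLocalUnit(nn.Module):
    """Local CNN branch + global self-attention branch, fused by a 1x1 conv."""
    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.local = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1))
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):                           # x: (B, C, H, W)
        b, c, h, w = x.shape
        local = self.local(x)                       # local texture details
        tokens = x.flatten(2).transpose(1, 2)       # (B, H*W, C)
        glob, _ = self.attn(tokens, tokens, tokens)
        glob = glob.transpose(1, 2).reshape(b, c, h, w)  # global facial structure
        return self.fuse(torch.cat([local, glob], dim=1))

if __name__ == "__main__":
    print(GlobalLocalUnit(64)(torch.randn(1, 64, 16, 16)).shape)  # torch.Size([1, 64, 16, 16])
```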

Scene context-aware graph convolutional network for skeleton-based action recognition
IF 1.7 | CAS Tier 4, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-11-17 | DOI: 10.1049/cvi2.12253
Wenxian Zhang

Skeleton-based action recognition methods commonly employ graph neural networks to learn different aspects of skeleton topology information. However, these methods often struggle to capture contextual information beyond the skeleton topology. To address this issue, a Scene Context-aware Graph Convolutional Network (SCA-GCN) that leverages potential contextual information in the scene is proposed. Specifically, SCA-GCN learns the co-occurrence probabilities of actions in specific scenarios from a common knowledge base and fuses these probabilities into the original skeleton topology decoder, producing more robust results. To demonstrate the effectiveness of SCA-GCN, extensive experiments are conducted on four widely used datasets: SBU, N-UCLA, NTU RGB + D, and NTU RGB + D 120. The experimental results show that SCA-GCN surpasses existing methods, and its core idea can be extended to other methods with only a few concatenation operations that add little computational cost.
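
A minimal sketch of the fusion idea follows; the knowledge-base construction and the GCN backbone are omitted, and fusion by concatenation is one plausible reading of the abstract rather than the authors' exact design. A scene-conditioned action co-occurrence prior is looked up and concatenated with the skeleton features before the final classifier.

```python
import torch
import torch.nn as nn

class SceneContextHead(nn.Module):
    """Concatenates skeleton features with a scene-conditioned action prior."""
    def __init__(self, feat_dim: int, num_actions: int):
        super().__init__()
        self.fc = nn.Linear(feat_dim + num_actions, num_actions)

    def forward(self, skeleton_feats, scene_ids, cooccurrence):
        # cooccurrence: (num_scenes, num_actions) probabilities from a knowledge base
        prior = cooccurrence[scene_ids]                         # (B, num_actions)
        return self.fc(torch.cat([skeleton_feats, prior], dim=1))

if __name__ == "__main__":
    head = SceneContextHead(feat_dim=256, num_actions=60)
    cooc = torch.softmax(torch.randn(10, 60), dim=1)            # toy co-occurrence table
    logits = head(torch.randn(4, 256), torch.tensor([0, 3, 3, 7]), cooc)
    print(logits.shape)  # torch.Size([4, 60])
```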

CR-Net: Robot grasping detection method integrating convolutional block attention module and residual module
IF 1.7 | CAS Tier 4, Computer Science | Q4 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-11-11 | DOI: 10.1049/cvi2.12252
Song Yan, Lei Zhang

Grasping detection, which involves identifying and assessing the grasp ability of objects by robotic systems, has garnered significant attention in recent years due to its pivotal role in the development of robotic systems and automated assembly processes. Despite notable advancements in this field, current methods often grapple with both practical and theoretical challenges that hinder their real-world applicability. These challenges encompass low detection accuracy, the burden of oversized model parameters, and the inherent complexity of real-world scenarios. In response to these multifaceted challenges, a novel lightweight grasping detection model that not only addresses the technical aspects but also delves into the underlying theoretical complexities is introduced. The proposed model incorporates attention mechanisms and residual modules to tackle the theoretical challenges posed by varying object shapes, sizes, materials, and environmental conditions. To enhance its performance in the face of these theoretical complexities, the proposed model employs a Convolutional Block Attention Module (CBAM) to extract features from RGB and depth channels, recognising the multifaceted nature of object properties. Subsequently, a feature fusion module effectively combines these diverse features, providing a solution to the theoretical challenge of information integration. The model then processes the fused features through five residual blocks, followed by another CBAM attention module, culminating in the generation of three distinct images representing capture quality, grasping angle, and grasping width. These images collectively yield the final grasp detection results, addressing the theoretical complexities inherent in this task. The proposed model's rigorous training and evaluation on the Cornell Grasp dataset demonstrate remarkable detection accuracy rates of 98.44% on the Image-wise split and 96.88% on the Object-wise split. The experimental results strongly corroborate the exceptional performance of the proposed model, underscoring its ability to overcome the theoretical challenges associated with grasping detection. The integration of the residual module ensures rapid training, while the attention module facilitates precise feature extraction, ultimately striking an effective balance between detection time and accuracy.
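
For illustration, the following sketch shows a compact CBAM block and the three output heads described above. The residual trunk is omitted, and the cos/sin angle encoding is an assumption commonly used in grasp detection rather than necessarily the authors' choice.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Compact Convolutional Block Attention Module: channel then spatial attention."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        ca = torch.sigmoid(self.mlp(x.mean(dim=(2, 3))) + self.mlp(x.amax(dim=(2, 3))))
        x = x * ca.view(b, c, 1, 1)                                  # channel attention
        sa = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(sa))                   # spatial attention

class GraspHeads(nn.Module):
    """Three 1x1 conv heads: grasp quality, angle (as cos/sin of 2θ) and width maps."""
    def __init__(self, channels: int):
        super().__init__()
        self.quality = nn.Conv2d(channels, 1, 1)
        self.angle = nn.Conv2d(channels, 2, 1)
        self.width = nn.Conv2d(channels, 1, 1)

    def forward(self, feats):
        return self.quality(feats), self.angle(feats), self.width(feats)

if __name__ == "__main__":
    feats = CBAM(64)(torch.randn(1, 64, 56, 56))
    q, a, w = GraspHeads(64)(feats)
    print(q.shape, a.shape, w.shape)
```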
