
Latest Publications in Pattern Recognition

KSOF: Leveraging kinematics and spatio-temporal optimal fusion for human motion prediction
IF 7.5 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-29 | DOI: 10.1016/j.patcog.2024.111206
Rui Ding, KeHua Qu, Jin Tang
Ignoring meaningful kinematic laws, which leads to improbable or impractical predictions, is one of the obstacles to human motion prediction. Current methods attempt to tackle this problem by taking simple kinematics information as auxiliary features to improve predictions. However, it remains challenging to deeply exploit human prior knowledge in this task, such as the fact that the trajectory formed by the same joint should be smooth and continuous. In this paper, we advocate explicitly describing kinematics information via velocity and acceleration by proposing a novel loss called the joint point smoothness (JPS) loss, which calculates the acceleration of joints to smooth sudden changes in joint velocity. In addition, capturing spatio-temporal dependencies to make feature representations more informative is another obstacle in this task. Therefore, we propose a dual-path network (KSOF) that models temporal and spatial dependencies with a kinematic temporal convolutional network (K-TCN) and a spatial graph convolutional network (S-GCN), respectively. Moreover, we propose a novel multi-scale fusion module named spatio-temporal optimal fusion (SOF) to enhance the extraction of essential correlations and important features at different scales from spatio-temporally coupled features. We evaluate our approach on three standard benchmark datasets: Human3.6M, CMU-Mocap, and 3DPW. For both short-term and long-term prediction, our method achieves outstanding performance on all of these datasets. The code is available at https://github.com/qukehua/KSOF.
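The JPS loss is only described at a high level above; as a hedged illustration, a minimal acceleration-penalty loss in that spirit, assuming predicted 3D joint positions shaped (batch, time, joints, 3), could be sketched as follows (the paper's exact weighting and normalization may differ).

```python
import torch

def jps_style_loss(pred):
    """pred: (batch, time, joints, 3) predicted 3D joint positions (shape is an assumption)."""
    vel = pred[:, 1:] - pred[:, :-1]   # first-order temporal difference: joint velocity
    acc = vel[:, 1:] - vel[:, :-1]     # second-order difference: joint acceleration
    return acc.norm(dim=-1).mean()     # penalize sudden changes in joint velocity
```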
Citations: 0
Camera-aware graph multi-domain adaptive learning for unsupervised person re-identification
IF 7.5 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-28 | DOI: 10.1016/j.patcog.2024.111217
Zhidan Ran, Xiaobo Lu, Xuan Wei, Wei Liu
Recently, unsupervised person re-identification (Re-ID) has gained much attention due to its practical significance in real-world application scenarios without pairwise labeled data. A key challenge for unsupervised person Re-ID is learning discriminative and robust feature representations under cross-camera scene variation. Contrastive learning approaches treat unsupervised representation learning as a dictionary look-up task. However, existing methods ignore both intra- and inter-camera semantic associations during training. In this paper, we propose a novel unsupervised person Re-ID framework, Camera-Aware Graph Multi-Domain Adaptive Learning (CGMAL), which conducts multi-domain feature transfer with semantic propagation to learn discriminative domain-invariant representations. Specifically, we treat each camera as a distinct domain and extract image samples from every camera domain to form a mini-batch. A heterogeneous graph is constructed to represent the relationships between all instances in a mini-batch. A Graph Convolutional Network (GCN) is then employed to fuse the image samples into a unified space and carry out semantic transfer, providing ideal feature representations. Subsequently, we construct a memory-based non-parametric contrastive loss to train the model. In particular, we design an adversarial training scheme for transferring the knowledge learned by the GCN to the feature extractor. Experiments on three benchmarks validate that our proposed approach is superior to state-of-the-art unsupervised methods.
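The exact form of the memory-based non-parametric contrastive loss is not given in the abstract; a common cluster-memory formulation from unsupervised Re-ID, consistent with the description but not necessarily CGMAL's implementation, is sketched below (temperature, momentum, and the update rule are assumptions).

```python
import torch
import torch.nn.functional as F

def memory_contrastive_loss(feats, cluster_ids, memory, temperature=0.05):
    """feats: (B, D) L2-normalized features; memory: (C, D) cluster centroids; cluster_ids: (B,)."""
    logits = feats @ memory.t() / temperature    # similarity of each sample to every centroid
    return F.cross_entropy(logits, cluster_ids)  # pull each sample toward its own cluster

@torch.no_grad()
def update_memory(memory, feats, cluster_ids, momentum=0.2):
    """Momentum update of the centroids with the current batch (a common choice, assumed here)."""
    for f, c in zip(feats, cluster_ids):
        memory[c] = F.normalize((1 - momentum) * memory[c] + momentum * f, dim=0)
```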
Citations: 0
RSANet: Relative-sequence quality assessment network for gait recognition in the wild
IF 7.5 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-28 | DOI: 10.1016/j.patcog.2024.111219
Guozhen Peng, Yunhong Wang, Shaoxiong Zhang, Rui Li, Yuwei Zhao, Annan Li
Gait recognition in the wild has received increasing attention since the gait pattern is hard to disguise and can be captured at a long distance. However, due to occlusions and segmentation errors, low-quality silhouettes are common and inevitable. To mitigate this low-quality problem, some prior works propose absolute-single quality assessment models. Although these methods obtain good performance, they only consider the silhouette quality of a single frame, without accounting for the variation across the entire sequence. In this paper, we propose a Relative-Sequence Quality Assessment Network, named RSANet. It uses the Average Feature Similarity Module (AFSM) to evaluate silhouette quality by calculating the similarity between one silhouette and all other silhouettes in the same silhouette sequence. The silhouette quality is therefore defined with respect to the sequence, reflecting a relative quality. Furthermore, RSANet uses Multi-Temporal-Receptive-Field Residual Blocks (MTB) to extend temporal receptive fields without increasing the number of parameters. It achieves a Rank-1 accuracy of 75.2% on Gait3D, 81.8% on GREW, and 77.6% on BUAA-Duke-Gait. The code is available at https://github.com/PGZ-Sleepy/RSANet.
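A minimal sketch of the relative, sequence-level quality idea behind the AFSM: score each silhouette by its average feature similarity to the other silhouettes in the same sequence; normalizing the scores with a softmax is an assumption for illustration, not necessarily the paper's choice.

```python
import torch
import torch.nn.functional as F

def relative_quality_weights(frame_feats):
    """frame_feats: (T, D) per-silhouette features extracted from one gait sequence."""
    f = F.normalize(frame_feats, dim=-1)
    sim = f @ f.t()                               # (T, T) pairwise cosine similarity
    sim.fill_diagonal_(0)                         # ignore self-similarity
    quality = sim.sum(dim=1) / (f.size(0) - 1)    # average similarity to the rest of the sequence
    return torch.softmax(quality, dim=0)          # relative, sequence-conditioned weights
```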
Citations: 0
Unsupervised evaluation for out-of-distribution detection
IF 7.5 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-26 | DOI: 10.1016/j.patcog.2024.111212
Yuhang Zhang, Jiani Hu, Dongchao Wen, Weihong Deng
We need to acquire labels for test sets to evaluate the performance of existing out-of-distribution (OOD) detection methods. In real-world deployment, it is laborious to label each new test set, as there are various OOD data with different difficulties. However, we need to use different OOD data to evaluate OOD detection methods because their performance varies widely. Thus, we propose evaluating OOD detection methods on unlabeled test sets, which frees us from labeling each new OOD test set. This is a non-trivial task, as without OOD labels we do not know which samples are correctly detected, and evaluation metrics such as AUROC cannot be calculated. In this paper, we address this important yet untouched task for the first time. Inspired by the bimodal distribution of OOD detection test sets, we propose an unsupervised indicator named Gscore that has a certain relationship with OOD detection performance; thus, we can use neural networks to learn that relationship and predict OOD detection performance without OOD labels. Through extensive experiments, we validate that there exists a strong, almost linear, quantitative correlation between Gscore and OOD detection performance. Additionally, we introduce Gbench, a new benchmark consisting of 200 different real-world OOD datasets, to test the performance of Gscore. Our results show that Gscore achieves state-of-the-art performance compared with other unsupervised evaluation methods and generalizes well across different in-distribution (ID)/OOD datasets, OOD detection methods, backbones, and ID:OOD ratios. Furthermore, we conduct analyses on Gbench to study the effects of backbones and ID/OOD datasets on OOD detection performance. The dataset and code will be available.
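The abstract does not define how Gscore is computed (it is learned by a network from unlabeled data); purely to illustrate the bimodality intuition, a hypothetical separability statistic over unlabeled detection scores might look like this. It is not the paper's indicator.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def bimodal_separability(scores):
    """scores: 1-D array of OOD detection scores on an unlabeled test set (illustrative only)."""
    gmm = GaussianMixture(n_components=2, random_state=0).fit(scores.reshape(-1, 1))
    mu = gmm.means_.ravel()
    sigma = np.sqrt(gmm.covariances_.ravel())
    return abs(mu[0] - mu[1]) / (sigma[0] + sigma[1])  # larger = better-separated ID/OOD modes
```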
Citations: 0
Enhancing out-of-distribution detection via diversified multi-prototype contrastive learning
IF 7.5 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-26 | DOI: 10.1016/j.patcog.2024.111214
Yulong Jia, Jiaming Li, Ganlong Zhao, Shuangyin Liu, Weijun Sun, Liang Lin, Guanbin Li
Detecting out-of-distribution (OOD) inputs is critical for safely deploying deep neural networks in the open world. Recent distance-based contrastive learning methods demonstrated their effectiveness by learning improved feature representations in the embedding space. However, those methods might lead to an incomplete and ambiguous representation of a class, thereby resulting in the loss of intra-class semantic information. In this work, we propose a novel diversified multi-prototype contrastive learning framework, which preserves the semantic knowledge within each class’s embedding space by introducing multiple fine-grained prototypes for each class. This preserves intrinsic features within the in-distribution data, promoting discrimination against OOD samples. We also devise an activation constraints technique to mitigate the impact of extreme activation values on other dimensions and facilitate the computation of distance-based scores. Extensive experiments on several benchmarks show that our proposed method is effective and beneficial for OOD detection, outperforming previous state-of-the-art methods.
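A sketch of one way to write a multi-prototype contrastive objective consistent with the description, assuming K learnable prototypes per class and pulling each sample toward its closest same-class prototype; the temperature and the max-over-prototypes choice are assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def multi_prototype_contrastive(feats, labels, prototypes, temperature=0.1):
    """feats: (B, D); labels: (B,); prototypes: (C, K, D), K fine-grained prototypes per class."""
    f = F.normalize(feats, dim=-1)
    p = F.normalize(prototypes, dim=-1)
    logits = torch.einsum('bd,ckd->bck', f, p) / temperature          # (B, C, K) similarities
    pos = logits[torch.arange(f.size(0)), labels].max(dim=-1).values  # closest same-class prototype
    denom = torch.logsumexp(logits.flatten(1), dim=1)                 # contrast against all prototypes
    return (denom - pos).mean()
```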
Citations: 0
Semantic decomposition and enhancement hashing for deep cross-modal retrieval
IF 7.5 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-26 | DOI: 10.1016/j.patcog.2024.111225
Lunke Fei, Zhihao He, Wai Keung Wong, Qi Zhu, Shuping Zhao, Jie Wen
Deep hashing has garnered considerable interest and has shown impressive performance in the domain of retrieval. However, the majority of the current hashing techniques rely solely on binary similarity evaluation criteria to assess the semantic relationships between multi-label instances, which presents a challenge in overcoming the feature gap across various modalities. In this paper, we propose semantic decomposition and enhancement hashing (SDEH) by extensively exploring the multi-label semantic information shared by different modalities for cross-modal retrieval. Specifically, we first introduce two independent attention-based feature learning subnetworks to capture the modality-specific features with both global and local details. Subsequently, we exploit the semantic features from multi-label vectors by decomposing the shared semantic information among multi-modal features such that the associations of different modalities can be established. Finally, we jointly learn the common hash code representations of multimodal information under the guidelines of quadruple losses, making the hash codes informative while simultaneously preserving multilevel semantic relationships and feature distribution consistency. Comprehensive experiments on four commonly used multimodal datasets offer strong support for the exceptional effectiveness of our proposed SDEH.
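The quadruple losses are not spelled out in the abstract; as a hedged illustration, a generic similarity-preserving term used by many deep cross-modal hashing methods (not necessarily SDEH's formulation) can be written as follows, where sim[i, j] is 1 if two instances share at least one label and 0 otherwise.

```python
import torch

def similarity_preserving_loss(h_img, h_txt, sim):
    """h_img, h_txt: (B, L) relaxed hash codes in [-1, 1] (e.g., tanh outputs); sim: (B, B) in {0, 1}."""
    inner = h_img @ h_txt.t() / h_img.size(1)   # normalized cross-modal inner products
    target = 2 * sim - 1                        # map {0, 1} similarity to {-1, +1}
    return ((inner - target) ** 2).mean()       # align codes of semantically similar pairs
```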
Citations: 0
UM-CAM: Uncertainty-weighted multi-resolution class activation maps for weakly-supervised segmentation
IF 7.5 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-26 | DOI: 10.1016/j.patcog.2024.111204
Jia Fu, Guotai Wang, Tao Lu, Qiang Yue, Tom Vercauteren, Sébastien Ourselin, Shaoting Zhang
Weakly-supervised medical image segmentation methods utilizing image-level labels have gained attention for reducing the annotation cost. They typically use Class Activation Maps (CAM) from a classification network but struggle with incomplete activation regions due to low-resolution localization without detailed boundaries. Unlike most existing methods, which focus only on improving the quality of CAMs, we propose a more unified weakly-supervised segmentation framework with image-level supervision. Firstly, an Uncertainty-weighted Multi-resolution Class Activation Map (UM-CAM) is proposed to generate high-quality pixel-level pseudo-labels. Subsequently, a Geodesic distance-based Seed Expansion (GSE) strategy is introduced to rectify ambiguous boundaries in the UM-CAM by leveraging contextual information. To train a final segmentation model from noisy pseudo-labels, we introduce a Random-View Consensus (RVC) training strategy to suppress unreliable pixels/voxels and encourage consistency between random-view predictions. Extensive experiments on 2D fetal brain segmentation and 3D brain tumor segmentation tasks showed that our method significantly outperforms existing weakly-supervised methods. Code is available at: https://github.com/HiLab-git/UM-CAM.
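A minimal sketch of uncertainty-weighted fusion of CAMs from several resolutions, assuming each map is roughly normalized to [0, 1] and pixel-wise binary entropy serves as the uncertainty measure; the paper's exact weighting may differ.

```python
import torch
import torch.nn.functional as F

def uncertainty_weighted_cam(cams, out_size):
    """cams: list of 2-D activation maps at different resolutions, each roughly in [0, 1]."""
    fused, weights = [], []
    for cam in cams:
        cam = F.interpolate(cam[None, None], size=out_size, mode='bilinear',
                            align_corners=False)[0, 0]
        p = cam.clamp(1e-6, 1 - 1e-6)
        entropy = -(p * p.log() + (1 - p) * (1 - p).log())  # pixel-wise binary entropy
        fused.append(cam)
        weights.append((-entropy).exp())                    # lower uncertainty -> higher weight
    w = torch.stack(weights)
    return (torch.stack(fused) * w).sum(0) / w.sum(0)
```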
Citations: 0
ClickTrack: Towards real-time interactive single object tracking
IF 7.5 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-26 | DOI: 10.1016/j.patcog.2024.111211
Kuiran Wang, Xuehui Yu, Wenwen Yu, Guorong Li, Xiangyuan Lan, Qixiang Ye, Jianbin Jiao, Zhenjun Han
Single object tracking (SOT) relies on precise object bounding box initialization. In this paper, we reconsider the deficiencies of current approaches to initializing single object trackers and propose ClickTrack, a new paradigm for single object tracking that uses click interaction in real-time scenarios. Moreover, clicks as an input type inherently lack hierarchical information. To address ambiguity in certain special scenarios, we design the Guided Click Refiner (GCR), which accepts a point and optional textual information as inputs, transforming the point into the bounding box expected by the operator. The bounding box is then used as the input to single object trackers. Experiments on the LaSOT and GOT-10k benchmarks show that trackers combined with GCR achieve stable performance in real-time interactive scenarios. Furthermore, we explore the integration of GCR into the Segment Anything Model (SAM), significantly reducing ambiguity issues when SAM receives point inputs.
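A hypothetical sketch of the interactive flow described above; `refiner` and `tracker` stand in for the GCR and for any off-the-shelf single object tracker (both names and signatures are assumptions, not the paper's API).

```python
def init_track_from_click(frame, click_xy, refiner, tracker, text=None):
    """Turn an operator click (plus optional text) into a tracker initialization."""
    box = refiner(frame, click_xy, text)  # GCR-style step: point (+ text) -> (x, y, w, h) box
    tracker.init(frame, box)              # standard SOT initialization with the refined box
    return box
```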
Citations: 0
SEMACOL: Semantic-enhanced multi-scale approach for text-guided grayscale image colorization
IF 7.5 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-23 | DOI: 10.1016/j.patcog.2024.111203
Chaochao Niu, Ming Tao, Bing-Kun Bao
High-quality colorization of grayscale images using text descriptions presents a significant challenge, especially in accurately coloring small objects. The existing methods have two major flaws. First, text descriptions typically omit size information of objects, resulting in text features that often lack semantic information reflecting object sizes. Second, these methods identify coloring areas by relying solely on low-resolution visual features from the Unet encoder and fail to leverage the fine-grained information provided by high-resolution visual features effectively. To address these issues, we introduce the Semantic-Enhanced Multi-scale Approach for Text-Guided Grayscale Image Colorization (SEMACOL). We first introduce a Cross-Modal Text Augmentation module that incorporates grayscale images into text features, which enables accurate perception of object sizes in text descriptions. Subsequently, we propose a Multi-scale Content Location module, which utilizes multi-scale features to precisely identify coloring areas within grayscale images. Meanwhile, we incorporate a Text-Influenced Colorization Adjustment module to effectively adjust colorization based on text descriptions. Finally, we implement a Dynamic Feature Fusion Strategy, which dynamically refines outputs from both the Multi-scale Content Location and Text-Influenced Colorization Adjustment modules, ensuring a coherent colorization process. SEMACOL demonstrates remarkable performance improvements over existing state-of-the-art methods on public datasets. Specifically, SEMACOL achieves a PSNR of 25.695, SSIM of 0.92240, LPIPS of 0.156, and FID of 17.54, surpassing the previous best results (PSNR: 25.511, SSIM: 0.92104, LPIPS: 0.157, FID: 26.93). The code will be available at https://github.com/ChchNiu/SEMACOL.
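A gated-fusion sketch of how a dynamic feature fusion strategy might combine the outputs of the content-location and colorization-adjustment branches; the 1x1-convolution gate and channel layout are illustrative assumptions rather than SEMACOL's actual design.

```python
import torch
import torch.nn as nn

class DynamicFusion(nn.Module):
    """Per-pixel gated fusion of two feature maps (illustrative sketch)."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(2 * channels, channels, kernel_size=1), nn.Sigmoid())

    def forward(self, location_feat, color_feat):
        g = self.gate(torch.cat([location_feat, color_feat], dim=1))  # dynamic per-pixel weights
        return g * location_feat + (1 - g) * color_feat
```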
Citations: 0
Diffusion-based framework for weakly-supervised temporal action localization
IF 7.5 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-11-23 | DOI: 10.1016/j.patcog.2024.111207
Yuanbing Zou, Qingjie Zhao, Prodip Kumar Sarker, Shanshan Li, Lei Wang, Wangwang Liu
Weakly supervised temporal action localization aims to localize action instances with only video-level supervision. Due to the absence of frame-level annotation supervision, how to effectively separate action snippets and background from semantically ambiguous features becomes an arduous challenge for this task. To address this issue from a generative modeling perspective, we propose a novel diffusion-based network with two stages. Firstly, we design a local masking mechanism module to learn local semantic information and generate binary masks at the early stage, which (1) are used to perform action-background separation and (2) serve as the pseudo-ground truth required by the diffusion module. Then, in the second stage, we propose a diffusion module to generate high-quality action predictions under pseudo-ground-truth supervision. In addition, we further optimize the new-refining operation in the local masking module to improve efficiency. The experimental results demonstrate that the proposed method achieves promising performance on the publicly available mainstream datasets THUMOS14 and ActivityNet. The code is available at https://github.com/Rlab123/action_diff.
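Assuming the diffusion module follows standard DDPM-style training on the pseudo-ground-truth masks (the abstract does not give these details), one forward noising step used to build its training pairs would look like this.

```python
import torch

def forward_diffuse(mask, t, alphas_cumprod):
    """mask: (B, T) pseudo-ground-truth snippet masks; t: timestep index; DDPM-style (assumed)."""
    noise = torch.randn_like(mask)
    a_t = alphas_cumprod[t].sqrt()
    s_t = (1 - alphas_cumprod[t]).sqrt()
    return a_t * mask + s_t * noise, noise   # noisy mask and the noise target for the denoiser
```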
Citations: 0