
Latest Articles in Pattern Recognition

CEA-Net: A multi-modal model for corn disease classification with dynamic fusion and cross-layer connection mechanism
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-12-02 DOI: 10.1016/j.patcog.2025.112788
Haoyang Wang, Guoxiong Zhou, Guiyun Chen
Corn is one of the most widely cultivated crops globally, yet it remains highly susceptible to a variety of diseases. With the rapid advancement of deep learning, image-based methods for corn disease classification have emerged and achieved promising results. However, many existing approaches still face challenges such as reliance on single-source information and limited feature extraction capacity. To address these issues, this paper proposes a multi-modal model named CEA-Net. First, we introduce a Cross-layer Connection Model (CCM) for image processing, which integrates multi-level wavelet blocks, VMamba, and Transformer components through a cross-layer connectivity mechanism. This design enhances spatial information reorganization and facilitates efficient feature extraction and reuse within the visual backbone network. Second, we propose an Efficient Dynamic Attention Fusion (EDAF) module for multi-modal feature fusion. EDAF dynamically modulates the contribution of each modality, emphasizing dominant sources while efficiently enhancing the representational capability of feature maps. Finally, we introduce Adaptive Adversarial Cross-Entropy Meta-learning (AACEM) for model pre-training. By combining meta-learning with sharpness-aware minimization and utilizing adaptive adversarial cross-entropy loss, AACEM improves both generalization and overall performance. Experimental results show that CEA-Net achieves an accuracy of 97.40%, outperforming networks such as EfficientViM and D2R by margins of 0.81%, 0.56%, 0.67%, and 0.55% across various metrics, demonstrating its significant practical value in corn disease management. Our code and dataset are available at: https://github.com/yiyuynanodesu/CEA-Net.
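The dynamic modulation of modality contributions described above can be illustrated with a small gating sketch. The module and tensor names below are hypothetical placeholders, not the authors' EDAF implementation; the sketch only shows the general idea of predicting per-modality weights from pooled features and summing the weighted maps.

```python
# Sketch of dynamic modality weighting (hypothetical, not the authors' EDAF):
# per-modality gates are predicted from globally pooled features, then the
# weighted feature maps are summed into a single fused map.
import torch
import torch.nn as nn

class DynamicFusion(nn.Module):
    def __init__(self, channels: int, num_modalities: int = 2):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(channels * num_modalities, channels),
            nn.ReLU(),
            nn.Linear(channels, num_modalities),
        )

    def forward(self, feats):
        # feats: list of (B, C, H, W) tensors, one per modality
        pooled = torch.cat([f.mean(dim=(2, 3)) for f in feats], dim=1)   # (B, C*M)
        weights = torch.softmax(self.gate(pooled), dim=1)                # (B, M)
        return sum(w.view(-1, 1, 1, 1) * f
                   for w, f in zip(weights.unbind(dim=1), feats))

img_feat = torch.randn(4, 64, 32, 32)   # image-branch features
aux_feat = torch.randn(4, 64, 32, 32)   # second-modality features
fused = DynamicFusion(64)([img_feat, aux_feat])
print(fused.shape)   # torch.Size([4, 64, 32, 32])
```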
Citations: 0
Intra-modal consistency for image-text retrieval through soft-label distillation
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-12-02 DOI: 10.1016/j.patcog.2025.112817
Da Chen , Yangtao Wang , Yanzhao Xie , Siyuan Chen , Weilong Peng , Maobin Tang , Meie Fang , C. L. Philip Chen , Ping Li , Wensheng Zhang
Image-text retrieval (ITR) plays a crucial role in measuring the semantic association between different modalities. Most existing ITR efforts focus solely on cross-modal similarity but overlook the equally important intra-modal similarity (i.e., similarity between images and similarity between texts), leading to erroneous image-text matches. To address this issue, we propose Intra-modal Consistency for ITR through Soft-label Distillation (termed ICSD), which cleverly leverages the intra-modal similarity (including encoder-derived intra-modal similarity and teacher-derived intra-modal similarity) to guide the cross-modal similarity. Firstly, during the online learning process, we (i) directly obtain the encoder-derived intra-modal similarity from image features generated by the image encoder and text features generated by the text encoder, and (ii) measure the cross-modal similarity based on those features with our well-designed Proximity Discrimination Module (PDM), which highly distinguishes the relationships between different features. During the offline calculation process, we pre-calculate the teacher-derived intra-modal similarity from the features generated by an image teacher model and a text teacher model. Secondly, we devise Intra-modal Similarity Fusion (ISF) to organically combine both encoder-derived and teacher-derived intra-modal similarity to guide the learning of cross-modal similarity, developing the intra-modal guidance loss by means of knowledge distillation. Finally, we combine the traditional contrastive loss to jointly optimize the image-text matching process. Our approach is plug-and-play and can be easily applied to existing ITR models without changing their original architectures. Extensive experiments on ITR models and datasets verify that our method can achieve much higher performance with lower model complexity than state-of-the-art (SOTA) approaches. For example, we outperform existing baselines (e.g., CLIP ViT-B/32 and CLIP ViT-L/14) by over 18% and 36%, respectively, in RSUM on MSCOCO. The code of this project is available on GitHub: https://github.com/chd1516/ICSD.
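A minimal sketch of the soft-label idea follows, under assumed batch shapes: the softmax over a teacher-derived intra-modal similarity matrix serves as a soft target for the softmax over the learned cross-modal similarity. The function name and temperature are illustrative, not the released ICSD code.

```python
# Sketch of an intra-modal guidance loss via soft-label distillation (illustrative
# shapes and temperature; not the released ICSD implementation).
import torch
import torch.nn.functional as F

def intra_modal_guidance_loss(img_feat, txt_feat, img_teacher, txt_teacher, tau=0.05):
    # Normalize so dot products are cosine similarities.
    img_feat, txt_feat = F.normalize(img_feat, dim=-1), F.normalize(txt_feat, dim=-1)
    img_teacher, txt_teacher = F.normalize(img_teacher, dim=-1), F.normalize(txt_teacher, dim=-1)

    cross_sim = img_feat @ txt_feat.t()                     # learned cross-modal similarity (B, B)
    soft_target = 0.5 * (img_teacher @ img_teacher.t()      # teacher image-image similarity
                         + txt_teacher @ txt_teacher.t())   # teacher text-text similarity

    log_p = F.log_softmax(cross_sim / tau, dim=1)
    q = F.softmax(soft_target / tau, dim=1)
    return F.kl_div(log_p, q, reduction="batchmean")        # distill soft labels into cross-modal scores

loss = intra_modal_guidance_loss(torch.randn(8, 256), torch.randn(8, 256),
                                 torch.randn(8, 512), torch.randn(8, 512))
```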
Citations: 0
Deep semi-supervised relation preserving learning model
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-12-02 DOI: 10.1016/j.patcog.2025.112793
Chenxi Tian, Lingling Li, Xu Liu, Licheng Jiao, Fang Liu, Shuyuan Yang
Feature learning models aim to mine more useful features so that downstream tasks can be completed more effectively. However, many algorithms suffer from problems such as instability. In this paper, a deep semi-supervised relation preserving learning model (DSRPL) is proposed, which embeds adaptive graph learning and category information to solve these problems. First, an autoencoder is used for feature learning to obtain nonlinear deep features. Then, the structural information of the original sample space is retained through adaptive graph learning, capturing the relationship between the deep features and the original data. Finally, a representation layer is embedded in the network, which takes advantage of semi-supervised learning while also utilizing category information to increase the compactness of samples within the same category. In this way, feature extraction and classification are integrated into one model, and alternating learning between them yields more effective results. Compared with the latest methods, DSRPL achieves more stable and more effective results.
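As an illustration of relation preservation, the sketch below builds a Gaussian affinity graph on the raw samples and penalizes latent-space distances between neighbouring points; the fixed kernel width and weighting scheme are assumptions for illustration, not the adaptive graph learning used by DSRPL.

```python
# Sketch of a relation-preserving term (hypothetical kernel width; not DSRPL's adaptive
# graph): a Gaussian affinity graph on the inputs weights pairwise latent distances,
# so points that are neighbours in the original space stay close after encoding.
import torch

def relation_preserving_loss(x, z, sigma=1.0):
    # x: (N, D) raw samples; z: (N, d) latent codes produced by the autoencoder.
    with torch.no_grad():
        w = torch.exp(-torch.cdist(x, x) ** 2 / (2 * sigma ** 2))  # input-space affinities
        w.fill_diagonal_(0.0)
    dist_z = torch.cdist(z, z) ** 2
    return (w * dist_z).sum() / w.sum().clamp_min(1e-8)

x = torch.randn(32, 100)
z = torch.randn(32, 16, requires_grad=True)
relation_preserving_loss(x, z).backward()
```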
Citations: 0
Cross-scale adaptive transformer with hierarchical feature synergy for aerial small object detection
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-12-02 DOI: 10.1016/j.patcog.2025.112822
Wenke Zhang, Mengmeng Liao
Small object detection has long been a challenging task in computer vision due to limited pixel representation, extremely small scales, and complex scene variations. These challenges are particularly pronounced in high-resolution aerial imagery, where small objects are targets that appear at a small scale in the image because of long shooting distances and are characterized by limited pixel coverage and sparse feature representation. To address these issues, this paper proposes a novel object detection framework based on a Cross-Scale Adaptive Transformer and Hierarchical Feature Synergy. The framework introduces a Cross-Scale Adaptive Transformer module (CST) to dynamically capture multi-scale features in the horizontal and vertical directions. Simultaneously, a Hierarchical Feature Synergy module (HFS) is designed to integrate low-, mid-, and high-level features, thereby enhancing semantic consistency and spatial detail preservation. Furthermore, we develop a novel loss function optimized for small object detection in aerial scenes, where long shooting distances produce small targets, effectively improving classification and localization accuracy. Extensive experiments on public datasets, including AI-TOD, VisDrone2019, and NWPU-VHR10, demonstrate that the proposed method significantly outperforms existing approaches in accuracy and efficiency. This work provides a new solution for practical aerial image analysis applications.
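A minimal sketch of fusing low-, mid-, and high-level feature maps at the finest resolution illustrates the general pattern behind hierarchical feature integration for small objects; the layer widths and the simple concatenate-and-mix design are assumptions, not the authors' HFS module.

```python
# Sketch of hierarchical feature fusion at the finest resolution (layer widths and the
# concatenate-and-mix design are assumptions, not the authors' HFS module).
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalFusion(nn.Module):
    def __init__(self, c_low, c_mid, c_high, c_out):
        super().__init__()
        self.proj = nn.ModuleList([nn.Conv2d(c, c_out, 1) for c in (c_low, c_mid, c_high)])
        self.mix = nn.Conv2d(3 * c_out, c_out, 3, padding=1)

    def forward(self, low, mid, high):
        size = low.shape[-2:]   # keep the finest resolution, where small objects survive
        feats = [p(f) for p, f in zip(self.proj, (low, mid, high))]
        feats = [feats[0]] + [F.interpolate(f, size=size, mode="bilinear", align_corners=False)
                              for f in feats[1:]]
        return self.mix(torch.cat(feats, dim=1))

fuser = HierarchicalFusion(64, 128, 256, 128)
out = fuser(torch.randn(1, 64, 80, 80), torch.randn(1, 128, 40, 40), torch.randn(1, 256, 20, 20))
print(out.shape)   # torch.Size([1, 128, 80, 80])
```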
Citations: 0
Structure and sensitivity in 3D human pose similarity quantification and estimation
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-12-02 DOI: 10.1016/j.patcog.2025.112805
Kyoungoh Lee , Jungwoo Huh , Jiwoo Kang , Sanghoon Lee
Recent advancements in deep learning have improved quantitative accuracy in 3D human pose estimation, but the estimated poses occasionally suffer from visual defects such as joint tremors and protrusions. While existing 3D pose similarity metrics and estimation models managed to reduce visual defects by addressing the structure of human poses, they still struggle in scenarios where visually sensitive joints are prevalent, particularly in cases of self-occlusion. In this paper, we identify these visually sensitive joints and demonstrate the significance of explicitly considering structure and sensitivity in the problem of 3D human pose estimation. Building upon the successful consideration of human pose structure, we first propose a new enhanced pose similarity metric PSIM+, which models sensitivity similarity to further capture human perception and focus on visual defects. Furthermore, we introduce a new 3D pose estimation model Dual Graph-based Convolutional Neural Networks (DG-CNN), which reconstructs 3D poses by focusing on the spatio-temporal correlation of the skeletal structure and actively controlling visually sensitive joints. By incorporating a novel similarity loss function, our model can implicitly model the structure and sensitivity of human poses through its architecture and explicitly through direct supervision. Our model not only improves the accuracy of the estimated pose but also increases the perceptual quality as evaluated by PSIM+, verifying the significance of structure and sensitivity awareness. Through rigorous benchmarking, we demonstrate that our metric and estimation model achieve the highest correlation with user scores and perform best in situations where visually sensitive joints are prevalent.
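The role of joint sensitivity can be illustrated with a per-joint weighted pose distance in which visually sensitive joints receive larger weights; the joint indices and weights below are placeholders, not the calibrated similarity model behind PSIM+.

```python
# Sketch of a sensitivity-weighted per-joint pose distance; the joint indices and
# weights are placeholders, not the calibrated sensitivities behind PSIM+.
import numpy as np

def weighted_pose_distance(pose_a, pose_b, joint_weights):
    # pose_a, pose_b: (J, 3) arrays of 3D joint positions; joint_weights: (J,) weights
    # that emphasize visually sensitive joints (e.g. end effectors prone to tremor).
    per_joint = np.linalg.norm(pose_a - pose_b, axis=1)
    w = joint_weights / joint_weights.sum()
    return float((w * per_joint).sum())

J = 17
weights = np.ones(J)
weights[[4, 7, 10, 13, 16]] = 2.0   # hypothetical "sensitive" joints get double weight
d = weighted_pose_distance(np.random.randn(J, 3), np.random.randn(J, 3), weights)
print(d)
```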
Citations: 0
Multi-level ensemble feature selection for omics data
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-12-02 DOI: 10.1016/j.patcog.2025.112809
Xiaojian Ding , Xin Wang , Shilin Chen , Kaixiang Wang
High-dimensional, small-sample data poses substantial challenges for conventional feature selection, including instability, limited generalization, and vulnerability to noise. Ensemble feature selection offers a viable remedy, yet its effectiveness hinges on how base selectors are diversified and aggregated. To address this issue, we introduce a multi-level diversity framework that systematically enhances selector heterogeneity. At the sample level, density peak clustering is employed to identify structurally distinct low-dimensional embeddings, thereby strengthening separability. At the kernel level, diversity is quantified through the dispersion of kernel eigenvalues, which captures functional variation and improves representation robustness. By incorporating randomized feature mappings, we generate a pool of candidate projections that are further screened using the proposed metrics to form a diverse and reliable ensemble of feature selectors. We also designed three complementary aggregation strategies, namely EFS-RA (rank aggregation), EFS-SA (score aggregation), and EFS-IA (intersection aggregation), to derive a stable final feature subset. Comprehensive experiments on 15 real-world datasets show that our method consistently surpasses state-of-the-art approaches.
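The three aggregation strategies can be sketched directly on a matrix of per-selector feature scores; the min-max normalization and tie handling below are simplifying assumptions rather than the exact EFS-RA/SA/IA procedures.

```python
# Sketch of rank, score, and intersection aggregation over per-selector feature scores
# (normalization and tie handling are simplifying assumptions).
import numpy as np

def aggregate(scores, k, mode="rank"):
    # scores: (n_selectors, n_features), higher means more relevant.
    if mode == "rank":            # EFS-RA style: average the per-selector ranks
        ranks = scores.argsort(axis=1).argsort(axis=1)   # 0 = worst, n_features-1 = best
        combined = ranks.mean(axis=0)
    elif mode == "score":         # EFS-SA style: average min-max normalized scores
        lo = scores.min(axis=1, keepdims=True)
        hi = scores.max(axis=1, keepdims=True)
        combined = ((scores - lo) / (hi - lo + 1e-12)).mean(axis=0)
    elif mode == "intersection":  # EFS-IA style: keep features chosen by every selector
        top = [set(np.argsort(-s)[:k]) for s in scores]
        return sorted(set.intersection(*top))            # may return fewer than k features
    else:
        raise ValueError(mode)
    return list(np.argsort(-combined)[:k])

scores = np.random.rand(5, 200)   # 5 base selectors scoring 200 features
print(aggregate(scores, k=20, mode="rank"))
```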
Citations: 0
Query expansion with topic-aware in-context learning and vocabulary projection for open-domain dense retrieval
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-12-02 DOI: 10.1016/j.patcog.2025.112812
Ronghan Li , Mingze Cui , Benben Wang , Yu Wang , Qiguang Miao
Large language models (LLMs) have recently emerged as pivotal components in open-domain dense retrieval. This study proposes a simple yet effective method to enhance dense retrieval using topic-aware In-Context Learning (ICL) and topic keyword projection. First, we leverage LLMs to generate a pseudo-passage based on topic-aware demonstrations obtained from the pre-trained cluster to which the target query belongs. Second, we employ the masked language model (MLM) head of the autoencoder LM to map the query representation to implicitly topic-related tokens as keywords. We combine these two approaches to augment the original query. Extensive experiments on four prevalent open-domain question answering (ODQA) datasets demonstrate that our method achieves an average improvement of 4.26% in R@20 compared to state-of-the-art query expansion work. Further analysis shows that the relevant demonstrations can provide higher-quality pseudo-passage generation, and the extracted keywords provide an interpretable basis for the effectiveness of dense retrieval. Code and data are available at https://github.com/XD-BDIV-NLP/TDPR.
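The vocabulary-projection step can be approximated with an off-the-shelf masked language model: feed the query, pool the MLM logits over positions, and keep the top-scoring vocabulary tokens as expansion keywords. The model choice (bert-base-uncased), max-pooling, and top-k are assumptions for illustration; the LLM-generated pseudo-passage would be concatenated to the query in the same way.

```python
# Sketch of MLM-based vocabulary projection for query expansion (model choice, pooling,
# and top-k are illustrative assumptions, not the paper's exact procedure).
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def keyword_expansion(query: str, top_k: int = 5) -> str:
    inputs = tok(query, return_tensors="pt")
    with torch.no_grad():
        logits = mlm(**inputs).logits                  # (1, seq_len, vocab_size)
    vocab_scores = logits.max(dim=1).values[0]         # max-pool over positions -> (vocab_size,)
    top_ids = vocab_scores.topk(top_k).indices.tolist()
    keywords = [tok.decode([i]).strip() for i in top_ids]
    keywords = [w for w in keywords if w.isalpha()]    # drop sub-word pieces and special tokens
    return query + " " + " ".join(keywords)

print(keyword_expansion("who wrote the origin of species"))
```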
Citations: 0
Deep positional encoders for graph classification
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-12-02 DOI: 10.1016/j.patcog.2025.112828
Ahmed Begga, Miguel Ángel Lozano, Francisco Escolano
Structural Pattern Recognition (SPR) includes the study of graphs as encoders of non-sequential and permutation-invariant patterns. In this regard, Graph Neural Networks (GNNs) are paving the way towards “inductive SPR”, where classical structural problems such as graph classification can be approached through learnable priors. However, since graphs do not have a canonical order, existing GNNs struggle to learn the structural role of each node in the graph, which becomes key in graph classification. In this paper, we address this problem by making Spectral Graph Theory “inductive”, i.e., by learning the eigenvectors of the graph Laplacian and then using them as positional encoders (PEs). Our experiments show that we significantly improve the SOTA of GNN-based graph classification.
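For reference, classical (non-learned) Laplacian positional encodings are simply each node's entries in the first non-trivial eigenvectors of the normalized graph Laplacian, computed directly as below; the paper's contribution is to learn this spectrum inductively rather than recompute it per graph.

```python
# Sketch of classical Laplacian positional encodings: each node is described by its
# entries in the first non-trivial eigenvectors of the normalized graph Laplacian.
import numpy as np

def laplacian_positional_encoding(adj: np.ndarray, k: int) -> np.ndarray:
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    lap = np.eye(adj.shape[0]) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    eigvals, eigvecs = np.linalg.eigh(lap)     # eigenvalues in ascending order
    return eigvecs[:, 1:k + 1]                 # skip the trivial eigenvector at eigenvalue 0

# 4-cycle graph: every node receives a k-dimensional positional encoding.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
print(laplacian_positional_encoding(adj, k=2).shape)   # (4, 2)
```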
Citations: 0
Mambafusion: State-space model-driven object-scene fusion for multi-modal 3D object detection
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-12-01 DOI: 10.1016/j.patcog.2025.112820
Tong Ning, Ke Lu, Xirui Jiang, Jian Xue
Existing multi-modal 3D detection struggles with geometric discrepancies between LiDAR/camera data and imbalanced feature alignment in Bird’s Eye View (BEV) space, where sparse foreground objects and scene-context gaps degrade performance. We propose MambaFusion, a novel framework unifying object-level fusion and scene-object interaction for robust 3D perception. Unlike scene-centric BEV fusion methods, MambaFusion introduces two modules: Object-Mamba, aligning 2D and 3D object candidates via grid-sorting and state-space models (SSM) to resolve modality inconsistencies, and Scene-Mamba, integrating image patches with object features and bidirectional SSM to model scene-object topological relationships. This dual-branch approach mitigates foreground-background imbalance and geometric misalignment while capturing holistic context. MambaFusion has achieved promising performance on both nuScenes and Waymo benchmarks.
Citations: 0
FPQuant: A deep learning-based scalable framework for fingerprint phenomics quantification in large-scale biometric population studies
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-12-01 DOI: 10.1016/j.patcog.2025.112808
Zhiyong Han , Yelin Shi , Zhao Zhang , Mu Li , Haiguo Zhang , Jingze Tan , Wentian Zhen , Tingting Liu , Xueying Wang , Chengyan Wang , Jiucun Wang , Li Jin , Sijia Wang , Manhua Liu , Jinxi Li
Fingerprint morphology, while evolutionarily conserved yet individually distinct, emerges as a pivotal biometric identifier in anthropological research and forensic investigation. Current methodologies for precise identification and quantification of complex morphological features, particularly ridge counting and the mean ridge breadth of ridge-furrow pairs, remain constrained by labor-intensive and monolithic pattern recognition systems. This study presents FPQuant (Fingerprint Phenomics Quantification), a multi-task deep learning framework integrating the most comprehensive fingerprint pattern classification, singularity detection, and quantification of 12 morphometric phenotypes to date. Leveraging the NSPT database of 28,867 expert-curated fingerprints, FPQuant achieved state-of-the-art performance with 97.18% (6-class), 98.62% (5-class), and 98.67% (4-class) pattern classification accuracy; 98.63% precision in topological singularity detection through optimized discrete keypoint localization; and expert-level precision in critical quantitative measurements including ridge counting. Cross-database validation demonstrated extraordinary generalizability, with 96.20% 5-class accuracy on NIST-4 and 97.75% singularity precision on FVC2002 DB1. Notably, FPQuant's integrated phenotypic capability revealed previously uncharacterized geographic variation in six morphometric traits, establishing novel fingerprint morphometric biomarkers for anthropological research. This study creates a scalable technical paradigm that bridges fingerprint phenomics with large-scale population studies, while providing potential new research avenues across anthropology, forensics, and biometric authentication.
Citations: 0