
Latest Publications in Pattern Recognition

Online multi-label classification under noisy and changing label distribution
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-12-09 DOI: 10.1016/j.patcog.2025.112892
Yizhang Zou, Xuegang Hu, Peipei Li, You Wu, Jun Hu
Two practical yet challenging issues remain for the task of online multi-label classification (OMC): 1) existing OMC methods all have limitations in terms of label quality and fail to handle the case where noisy labels occur among both relevant and irrelevant labels; 2) the ground-truth label distribution may vary over time, is hidden within the observed noisy label distribution, and is therefore difficult to track. Motivated by this, we propose an online multi-label classification algorithm that is robust to such a Noisy and Changing Label Distribution (NCLD). First, an objective is designed to model the OMC framework of label scoring and thresholding. To ensure that the zero threshold can accurately separate the ground-truth positive and negative labels, the local feature graph is used to reconstruct the label scores jointly with the observed labels, and an unbiased ranking loss is derived and applied to constrain relevant label scores to be higher than irrelevant ones. Thanks to the derived closed-form solution and the sequential updating rule, the online model can be efficiently updated and achieves performance competitive with batch methods. In addition, by detecting the difference between two adjacent chunks with the unbiased label cardinality, we identify changes in the ground-truth label distribution and reset the ranking model, or all information learned from the past, to adapt to the new distribution. With all the above techniques, the proposed method delivers stable and robust online classification performance under NCLD. Finally, empirical results validate the effectiveness of our method in classifying instances under NCLD.
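For concreteness, here is a minimal PyTorch sketch of a pairwise margin ranking loss that enforces the constraint described above (relevant label scores pushed above irrelevant ones). This is an illustrative stand-in, not the paper's unbiased estimator; the function name and margin value are assumptions.

import torch

def pairwise_ranking_loss(scores, labels, margin=1.0):
    # scores: (n_labels,) model outputs; labels: (n_labels,) observed 0/1 labels.
    # Penalizes every (relevant, irrelevant) pair whose score gap falls below the
    # margin, pushing relevant labels above the zero decision threshold.
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    if pos.numel() == 0 or neg.numel() == 0:
        return scores.new_zeros(())
    gap = pos.unsqueeze(1) - neg.unsqueeze(0)   # all pairwise score differences
    return torch.clamp(margin - gap, min=0).mean()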
Citations: 0
Push the limit of scene text recognition using character and text length guided text super-resolution
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-12-09 DOI: 10.1016/j.patcog.2025.112869
Jiangtao Nie, Boxiong Wu, Wenyu Peng, Wei Wei, Lei Zhang, Chen Ding, Yanning Zhang
Although scene text recognition has achieved remarkable progress in recent years, its performance remains limited when dealing with low-resolution (LR) scene text. To mitigate this issue, some recent approaches have adopted super-resolution (SR) as a preprocessing step. Nevertheless, these approaches tend to treat SR and recognition as independent tasks, often overlooking their inherent complementarity, which may restrict the full potential of the system and hinder further performance improvements. To overcome this limitation, we propose a unified framework that performs SR and scene text recognition simultaneously, enabling the two tasks to mutually reinforce each other. In addition, we seek to more effectively exploit the prior information inherently present in scene text images to guide the SR network toward better reconstruction performance. Specifically, we introduce an end-to-end architecture that integrates the SR and recognition modules within a unified framework and adopts an iterative strategy to facilitate mutual enhancement between the two tasks. Inspired by human visual perception, we further incorporate two perceptual priors, Character Features and Text Length, collectively referred to as the CF-TL priors. These priors leverage semantic and structural cues to enhance the reconstruction of text images and improve recognition accuracy. Extensive experiments conducted on benchmark datasets demonstrate that our method significantly outperforms existing approaches in terms of recognition accuracy. These results highlight the effectiveness of our framework and its potential to push the limits of scene text recognition for low-resolution inputs, toward more robust and accurate systems.
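As a rough sketch of the iterative mutual-enhancement strategy, the Python loop below alternates an SR pass and a recognition pass, feeding a recognition-derived prior (a stand-in for the CF-TL priors) back into the SR network. The interfaces of sr_net and rec_net are hypothetical.

def iterative_sr_recognition(lr_image, sr_net, rec_net, n_iters=3):
    # Each round re-reconstructs the LR input under the current text prior and
    # then re-estimates the prior from the sharper image.
    prior = None                              # no prior on the first pass
    for _ in range(n_iters):
        sr_image = sr_net(lr_image, prior)    # hypothetical: SR conditioned on the prior
        logits, prior = rec_net(sr_image)     # hypothetical: class logits plus new prior
    return sr_image, logits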
Citations: 0
Linking model intervention to causal interpretation in model explanation
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-12-09 DOI: 10.1016/j.patcog.2025.112814
Debo Cheng, Ziqi Xu, Jiuyong Li, Lin Liu, Kui Yu, Thuc Duy Le, Jixue Liu
Most existing explanation methods mimic the idea of intervention in a model, where the intervention effect is used for explanation. The model intervention effect (MIE) of a feature on the outcome is quantified by the difference in the model's prediction when the feature value is changed from its current value to a baseline value. By contrast, a causal intervention uses the do-operator to modify the data-generating mechanism; therefore, the MIE is, by default, an associational, model-dependent quantity. In this paper, we study the conditions under which the MIE has a causal interpretation, i.e., when it indicates whether a feature is a direct cause of the outcome. This work links the MIE to the causal interpretation of a model. Such a linkage is important because it indicates whether a machine learning model is trustworthy to domain experts. The conditions also reveal the limitations of using the MIE for causal interpretation in an environment with unobserved features. Experiments on semi-synthetic and image datasets validate the theorems and demonstrate the potential of the MIE for causal model interpretation.
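The MIE definition above maps directly onto code. A minimal NumPy sketch, assuming predict maps a (1, d) feature array to a length-1 prediction array; reading "difference" as current-value prediction minus baseline-value prediction is one possible sign convention.

import numpy as np

def model_intervention_effect(predict, x, j, baseline):
    # Returns f(x) - f(x with feature j set to its baseline value).
    x = np.asarray(x, dtype=float)
    x_int = x.copy()
    x_int[j] = baseline        # mimic an intervention inside the model's input
    return float(predict(x[None, :])[0] - predict(x_int[None, :])[0])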
Citations: 0
Deep learning-based detection of autism spectrum disorder and emotion recognition in children
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-12-09 DOI: 10.1016/j.patcog.2025.112906
T. Akalya, D. Ramyachitra, M. Shabarna, C. Legha
Autism Spectrum Disorder (ASD) is a multifaceted neurodevelopmental disability characterized by deficits in social communication and emotion recognition. Early and accurate identification of ASD is important; however, clinicians and researchers have often depended on lengthy, in-clinic assessments.
This study proposes two deep learning models: a hybrid ResNet50V2+InceptionV3 model for ASD prediction from facial images and an enhanced MobileNet for recognizing six emotions in children with ASD. Data were collected from two public datasets, preprocessing (normalization and augmentation) was performed, and both models were trained and tested. The hybrid model achieved 95 % accuracy for predicting ASD, and the enhanced MobileNet achieved 73.3 % accuracy for recognizing emotions; both accuracy rates were significantly improved compared to baseline models. Overall, both models demonstrated that hybrid and enhanced architectures improve predictive accuracy and generalization for children with ASD from facial images. The models can also serve as non-invasive methodologies for early detection of ASD and emotion classification in clinical or educational contexts. Future research will include multimodal inputs and explainable-AI methods to strengthen clinical implications.
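A minimal Keras sketch of the hybrid two-backbone idea: both pretrained backbones embed the same face image, and their pooled features are concatenated for a binary ASD head. The input size, head width, and dropout rate are assumptions rather than the paper's configuration, and inputs are assumed to be preprocessed to each backbone's expected range.

from tensorflow.keras import layers, Model
from tensorflow.keras.applications import InceptionV3, ResNet50V2

def build_hybrid_asd_model(input_shape=(224, 224, 3)):
    resnet = ResNet50V2(include_top=False, weights="imagenet",
                        pooling="avg", input_shape=input_shape)
    incep = InceptionV3(include_top=False, weights="imagenet",
                        pooling="avg", input_shape=input_shape)
    inp = layers.Input(shape=input_shape)
    feats = layers.Concatenate()([resnet(inp), incep(inp)])  # 2048 + 2048 features
    x = layers.Dropout(0.3)(layers.Dense(256, activation="relu")(feats))
    out = layers.Dense(1, activation="sigmoid")(x)           # P(ASD)
    return Model(inp, out)

model = build_hybrid_asd_model()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])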
Citations: 0
Fuzzy cluster-aware contrastive clustering for time series
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-12-09 DOI: 10.1016/j.patcog.2025.112899
Congyu Wang, Mingjing Du, Xiang Jiang, Yongquan Dong
The rapid growth of unlabeled time series data, driven by the Internet of Things (IoT), poses significant challenges in uncovering underlying patterns. Traditional unsupervised clustering methods often fail to capture the complex nature of time series data. Recent deep learning-based clustering approaches, while effective, struggle with insufficient representation learning and poor integration of clustering objectives. To address these issues, we propose a fuzzy cluster-aware contrastive clustering framework (FCACC) that jointly optimizes representation learning and clustering. Our approach introduces a novel three-view data augmentation strategy that enhances feature extraction by leveraging various characteristics of time series data. Additionally, we propose a cluster-aware hard negative sample generation mechanism that dynamically constructs high-quality negative samples using clustering structure information, thereby improving the model's discriminative ability. By leveraging fuzzy clustering, FCACC dynamically generates cluster structures to guide the contrastive learning process, resulting in more accurate clustering. Extensive experiments on 40 benchmark datasets show that FCACC outperforms the nine selected baseline methods, providing an effective solution for unsupervised time series learning. The source code is publicly available at https://github.com/Du-Team/FCACC.
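One plausible reading of cluster-aware contrastive learning, sketched in PyTorch: fuzzy co-membership soft-masks same-cluster negatives so the loss concentrates on hard, cross-cluster negatives. This illustrates the idea only and is not FCACC's actual loss; see the released code linked above for the real implementation.

import torch
import torch.nn.functional as F

def cluster_aware_info_nce(z1, z2, memberships, temperature=0.5):
    # z1, z2: (n, d) embeddings of two views; memberships: (n, k) fuzzy memberships.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / temperature                    # cross-view similarities
    same_cluster = memberships @ memberships.t()       # soft P(two samples share a cluster)
    weights = 1.0 - same_cluster                       # keep cross-cluster (hard) negatives
    weights.fill_diagonal_(1.0)                        # positives keep full weight
    logits = sim + torch.log(weights.clamp_min(1e-8))  # soft-mask negatives in the softmax
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)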
Citations: 0
Multi-model co-training for medical image segmentation with limited annotation
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-12-09 DOI: 10.1016/j.patcog.2025.112885
Yuanzhi Cheng, Xinghu Zhou, Guanghan Wang
Obtaining dense voxel-wise labels for medical image segmentation is prohibitively expensive and time-consuming, often resulting in a limited amount of labeled data. To mitigate this challenge, we propose a semi-supervised segmentation framework that leverages unlabeled data through structured mining and consistency-driven learning to enhance segmentation performance. The proposed framework consists of three synergistic modules: (1) an Adaptive Multi-Scale Consistency Pyramid (AMCP) module that promotes semantic consistency across scales to capture anatomical features at varying resolutions; (2) a Distribution-Calibrated Feature Alignment (DCFA) module that aligns feature distributions in a group-aware manner to reduce domain- and batch-level inconsistency between labeled and unlabeled data; and (3) an Unlabeled Data-Mining Learning (UDL) module that dynamically selects high-confidence unlabeled samples via uncertainty-aware mining strategies. These components are jointly optimized to enhance representation learning and supervision reliability. Extensive experiments on four public medical image segmentation datasets (BTCV, LA, Pancreas-CT, and ACDC) demonstrate that our framework consistently outperforms existing semi-supervised approaches, delivering superior accuracy and generalization in limited-annotation regimes. Specifically, in terms of Dice, with 10 % annotations, our method yields 91.09 % on the LA dataset, 90.72 % on the ACDC dataset, and 80.17 % on the Pancreas-CT dataset. On the BTCV dataset, it further obtains 76.39 % with 30 % annotations, consistently demonstrating the effectiveness of our approach.
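To illustrate the cross-scale consistency idea behind the AMCP module, here is a minimal PyTorch sketch that compares two models' predictions for the same unlabeled volume at several resolutions; the scale set and the MSE criterion are assumptions, not the paper's exact objective.

import torch
import torch.nn.functional as F

def multiscale_consistency(student_logits, teacher_logits, scales=(1.0, 0.5, 0.25)):
    # Inputs are (N, C, D, H, W) voxel-wise logits for the same unlabeled volume.
    loss = 0.0
    for s in scales:
        p = F.interpolate(student_logits, scale_factor=s,
                          mode="trilinear", align_corners=False)
        q = F.interpolate(teacher_logits.detach(), scale_factor=s,
                          mode="trilinear", align_corners=False)
        loss = loss + F.mse_loss(torch.softmax(p, 1), torch.softmax(q, 1))
    return loss / len(scales)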
Citations: 0
Omnidirectional image quality assessment using frequency-domain information
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-12-08 DOI: 10.1016/j.patcog.2025.112882
Lixiong Liu, Ruibo Cheng, Qingbing Sang, Qiuping Jiang
Most current omnidirectional image quality assessment (OIQA) models focus on spatial feature representation but rarely consider frequency-domain information. To address this, we propose a frequency-aware omnidirectional image quality assessment (FOIQA) method that adaptively captures different frequency-domain components. Specifically, we first adaptively decompose an omnidirectional image into high- and low-frequency components and feed the decomposed components to dedicated network branches for feature extraction. To enhance the representation of both frequency components, we design separate information enhancement modules for the high- and low-frequency components. Then, considering the mutual influence between local and global perception, we design a dual-frequency feature fusion module that fuses the enhanced features by simulating the interactions between the two frequency components. The fused features are finally used for quality prediction. Experimental results on three public databases show the superiority of our proposed model over all compared image quality assessment (IQA) and OIQA models.
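One simple way to realize a high/low-frequency split is Gaussian low-pass filtering with the residual kept as the high-frequency band, as in this PyTorch sketch; the kernel size and sigma are assumptions, and the paper's adaptive decomposition may differ.

import torch
import torch.nn.functional as F

def split_frequencies(image, kernel_size=9, sigma=2.0):
    # image: (N, C, H, W). Returns (low, high) with image == low + high.
    c = image.shape[1]
    ax = torch.arange(kernel_size, dtype=torch.float32) - (kernel_size - 1) / 2
    g1d = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
    g1d = g1d / g1d.sum()
    kernel = (g1d[:, None] @ g1d[None, :]).expand(
        c, 1, kernel_size, kernel_size).contiguous()      # depthwise Gaussian kernel
    low = F.conv2d(image, kernel.to(image), padding=kernel_size // 2, groups=c)
    return low, image - low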
Citations: 0
A multi-spatiotemporal joint next-POI travel sequence recommendation method based on federated learning
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-12-08 DOI: 10.1016/j.patcog.2025.112867
Chunhua Ju, Huajian Zhou, Fuguang Bao, Chonghuan Xu
Accurately predicting the next point of interest (POI) that users will visit is essential for delivering personalized travel services, thereby significantly enhancing the user experience. During travel, check-in data is frequently sparse both temporally and spatially. Moreover, the need to gather extensive private user information for personalized services makes it challenging to recommend POIs accurately while effectively protecting user privacy. This paper proposes a multi-spatiotemporal joint next-POI travel sequence recommendation method based on federated learning. The method optimizes the loss in clustered federated learning by jointly learning user spatiotemporal check-in data at various levels of granularity. It employs a positive and negative segmented perturbation mechanism within multi-task joint learning, effectively addressing the asynchrony issues inherent in federated learning. Extensive experiments on two popular real-world datasets demonstrate that our proposed method significantly outperforms other state-of-the-art baselines in the accuracy of next-POI travel sequence recommendations while maintaining privacy protection.
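For context, a minimal PyTorch sketch of the plain federated-averaging cycle that such methods build on: each client fine-tunes a private copy on its own check-in data, and only model weights, never raw trajectories, leave the device. The paper's clustered optimization and segmented perturbation are not reproduced; local_train is a hypothetical client-side routine, and the unweighted mean assumes equally sized clients.

import copy
import torch

def federated_averaging(global_model, client_loaders, local_train, rounds=10):
    for _ in range(rounds):
        states = []
        for loader in client_loaders:
            local = copy.deepcopy(global_model)   # private copy per client
            local_train(local, loader)            # client-side optimization
            states.append(local.state_dict())
        avg = {k: torch.stack([s[k].float() for s in states]).mean(0)
               for k in states[0]}                # aggregate weights only
        global_model.load_state_dict(avg)
    return global_model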
Citations: 0
GELD: A unified neural model for efficiently solving traveling salesman problems across different scales
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-12-08 DOI: 10.1016/j.patcog.2025.112865
Yubin Xiao, Di Wang, Rui Cao, Xuan Wu, Boyang Li, You Zhou
The Traveling Salesman Problem (TSP) is a well-known combinatorial optimization problem with broad real-world applications. Recent advancements in neural network-based TSP solvers have shown promising results. Nonetheless, these models often struggle to efficiently solve both small- and large-scale TSPs using the same set of pre-trained model parameters, limiting their practical utility. To address this issue, we introduce a novel neural TSP solver named GELD, built upon our proposed broad-global-assessment and refined-local-selection framework. Specifically, GELD integrates a lightweight Global-view Encoder (GE) with a heavyweight Local-view Decoder (LD) to enrich embedding representations while accelerating the decision-making process. Moreover, GE incorporates a novel low-complexity attention mechanism, allowing GELD to achieve low inference latency and scalability to larger-scale TSPs. Additionally, we propose a two-stage training strategy that utilizes training instances of different sizes to bolster GELD's generalization ability. Extensive experiments on both synthetic and real-world datasets demonstrate that GELD outperforms eight state-of-the-art models in both solution quality and inference speed. Furthermore, GELD can be employed as a post-processing method to significantly elevate the quality of solutions derived by existing neural TSP solvers at affordable additional computing time. Notably, GELD solves TSP instances with up to 744,710 nodes, making it, to the best of our knowledge, the first model capable of solving TSPs of this size without relying on divide-and-conquer strategies.
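A toy NumPy caricature of the broad-global-assessment / refined-local-selection framework: a precomputed scores array stands in for the global encoder's one-shot assessment, and the tour is grown by picking the best-scored city inside a small local window of nearest unvisited neighbors. This is illustrative only, not GELD's decoder.

import numpy as np

def greedy_local_selection(coords, scores, k=16):
    # coords: (n, 2) city positions; scores: (n,) global desirability scores.
    n = len(coords)
    unvisited = list(range(1, n))
    tour = [0]
    while unvisited:
        rem = np.asarray(unvisited)
        d = np.linalg.norm(coords[rem] - coords[tour[-1]], axis=1)
        window = rem[np.argsort(d)[:k]]               # broad -> local candidate window
        nxt = int(window[np.argmax(scores[window])])  # refined pick by global score
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour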
Citations: 0
Attention-driven refinement network for continuity-preserving airway segmentation in class-imbalanced CT
IF 7.6 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-12-08 DOI: 10.1016/j.patcog.2025.112838
Guobin Zhang, Kelong Chen, Yucan Liu, Shuai Li, Zhenzhong Liu
Tubular airway segmentation is a prerequisite for bronchoscopic intervention in treating pulmonary diseases. Training convolutional neural networks (CNNs) for airway segmentation remains a clinical challenge due to local discontinuities and distal small-airway leakage caused by low resolution and severe data imbalance. To address these issues, we propose an attention-driven refinement network, based on the degree of feature contribution, to improve fine-grained airway segmentation. A pointwise feature recalibration (PWFR) module is first designed to implement a differential feature-treatment strategy that emphasizes competitive features and continuously suppresses redundant ones, highlighting the prominence of airways in the learning task. Furthermore, a novel attention-driven knowledge distillation (AttdKD) module is developed to fully integrate spatial and channel knowledge at various stages of the network, which strengthens the focus on distal small airways under class imbalance and mitigates the local discontinuity problem. Segmentation visualizations indicate that, guided by the PWFR and AttdKD modules, our refinement network effectively improves the thin-airway recognition rate and the overall continuity of the airways. The branches detected (BD) and tree length detected (TD) metrics achieved 93.96 %/81.7 % and 92.71 %/79.9 % on the ATM’22 and EXACT’09 datasets, respectively, and 92.72 %/92.44 % and 92.16 %/91.97 % on the abnormal-case test sets of COVID-19 and fibrosis, respectively. Extensive experiments demonstrate that our proposed method exhibits excellent sensitivity to distal small airways and achieves notable overall segmentation performance compared with state-of-the-art (SOTA) baselines.
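A squeeze-and-excitation-style stand-in for the pointwise feature recalibration idea, sketched in PyTorch: learned per-channel gates amplify competitive features and suppress redundant ones before decoding. The actual PWFR design may differ; the channel and reduction sizes are assumptions.

import torch
import torch.nn as nn

class PointwiseRecalibration(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),                        # global context per channel
            nn.Conv3d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                   # per-channel gates in (0, 1)
        )

    def forward(self, x):              # x: (N, C, D, H, W) CT feature map
        return x * self.gate(x)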
Citations: 0