
Latest articles in Pattern Recognition Letters

SAM-guided prompt learning for Multiple Sclerosis lesion segmentation
IF 3.3 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-11-17 · DOI: 10.1016/j.patrec.2025.11.018
Federica Proietto Salanitri, Giovanni Bellitto, Salvatore Calcagno, Ulas Bagci, Concetto Spampinato, Manuela Pennisi
Accurate segmentation of Multiple Sclerosis (MS) lesions remains a critical challenge in medical image analysis due to their small size, irregular shape, and sparse distribution. Despite recent progress in vision foundation models — such as SAM and its medical variant MedSAM — these models have not yet been explored in the context of MS lesion segmentation. Moreover, their reliance on manually crafted prompts and high inference-time computational cost limit their applicability in clinical workflows, especially in resource-constrained environments. In this work, we introduce a novel training-time framework for effective and efficient MS lesion segmentation. Our method leverages SAM solely during training to guide a prompt learner that automatically discovers task-specific embeddings. At inference, SAM is replaced by a lightweight convolutional aggregator that maps the learned embeddings directly into segmentation masks, enabling fully automated, low-cost deployment. We show that our approach significantly outperforms existing specialized methods on the public MSLesSeg dataset, establishing new performance benchmarks in a domain where foundation models had not previously been applied. To assess generalizability, we also evaluate our method on pancreas and prostate segmentation tasks, where it achieves competitive accuracy while requiring an order of magnitude fewer parameters and computational resources compared to SAM-based pipelines. By eliminating the need for foundation models at inference time, our framework enables efficient segmentation without sacrificing accuracy. This design bridges the gap between large-scale pretraining and real-world clinical deployment, offering a scalable and practical solution for MS lesion segmentation and beyond. Code is available at https://github.com/perceivelab/MS-SAM-LESS.
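The training-time/inference-time split described above can be pictured with a minimal PyTorch sketch. The module names (PromptLearner, ConvAggregator), the number of prompt embeddings, and the tensor sizes are illustrative assumptions, not the authors' released implementation; the SAM-guided training loss is only indicated in comments.

```python
import torch
import torch.nn as nn

class PromptLearner(nn.Module):
    """Learns task-specific prompt embeddings from an input image (illustrative)."""
    def __init__(self, in_ch=1, embed_dim=256, n_prompts=8):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.to_prompts = nn.Linear(64, n_prompts * embed_dim)
        self.n_prompts, self.embed_dim = n_prompts, embed_dim

    def forward(self, x):
        feat = self.backbone(x).flatten(1)                   # (B, 64)
        return self.to_prompts(feat).view(-1, self.n_prompts, self.embed_dim)

class ConvAggregator(nn.Module):
    """Lightweight head used at inference: maps learned embeddings to a mask."""
    def __init__(self, embed_dim=256, out_size=128):
        super().__init__()
        self.out_size = out_size
        self.proj = nn.Linear(embed_dim, 16 * 16)
        self.decode = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, prompts):                               # (B, P, D)
        maps = self.proj(prompts).mean(dim=1).view(-1, 1, 16, 16)
        maps = nn.functional.interpolate(maps, size=self.out_size,
                                         mode="bilinear", align_corners=False)
        return self.decode(maps)                              # (B, 1, H, W) logits

# During training (conceptually), SAM consumes the learned prompts and its masks help
# supervise the aggregator; at inference SAM is dropped and only these modules remain.
x = torch.randn(2, 1, 128, 128)
prompts = PromptLearner()(x)
mask_logits = ConvAggregator()(prompts)
print(mask_logits.shape)  # torch.Size([2, 1, 128, 128])
```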
{"title":"SAM-guided prompt learning for Multiple Sclerosis lesion segmentation","authors":"Federica Proietto Salanitri ,&nbsp;Giovanni Bellitto ,&nbsp;Salvatore Calcagno ,&nbsp;Ulas Bagci ,&nbsp;Concetto Spampinato ,&nbsp;Manuela Pennisi","doi":"10.1016/j.patrec.2025.11.018","DOIUrl":"10.1016/j.patrec.2025.11.018","url":null,"abstract":"<div><div>Accurate segmentation of Multiple Sclerosis (MS) lesions remains a critical challenge in medical image analysis due to their small size, irregular shape, and sparse distribution. Despite recent progress in vision foundation models — such as SAM and its medical variant MedSAM — these models have not yet been explored in the context of MS lesion segmentation. Moreover, their reliance on manually crafted prompts and high inference-time computational cost limits their applicability in clinical workflows, especially in resource-constrained environments. In this work, we introduce a novel training-time framework for effective and efficient MS lesion segmentation. Our method leverages SAM solely during training to guide a prompt learner that automatically discovers task-specific embeddings. At inference, SAM is replaced by a lightweight convolutional aggregator that maps the learned embeddings directly into segmentation masks—enabling fully automated, low-cost deployment. We show that our approach significantly outperforms existing specialized methods on the public MSLesSeg dataset, establishing new performance benchmarks in a domain where foundation models had not previously been applied. To assess generalizability, we also evaluate our method on pancreas and prostate segmentation tasks, where it achieves competitive accuracy while requiring an order of magnitude fewer parameters and computational resources compared to SAM-based pipelines. By eliminating the need for foundation models at inference time, our framework enables efficient segmentation without sacrificing accuracy. This design bridges the gap between large-scale pretraining and real-world clinical deployment, offering a scalable and practical solution for MS lesion segmentation and beyond. Code is available at <span><span>https://github.com/perceivelab/MS-SAM-LESS</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"199 ","pages":"Pages 205-211"},"PeriodicalIF":3.3,"publicationDate":"2025-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145579804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Additive decomposition of one-dimensional signals using Transformers
IF 3.3 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-11-17 · DOI: 10.1016/j.patrec.2025.11.002
Samuele Salti, Andrea Pinto, Alessandro Lanza, Serena Morigi
One-dimensional signal decomposition is a well-established and widely used technique across various scientific fields. It serves as a highly valuable pre-processing step for data analysis. While traditional decomposition techniques often rely on mathematical models, recent research suggests that applying the latest deep learning models to this very ill-posed inverse problem represents an exciting, unexplored area with promising potential. This work presents a novel method for the additive decomposition of one-dimensional signals. We leverage the Transformer architecture to decompose signals into their constituent components: piecewise constant, smooth (trend), highly-oscillatory, and noise components. Our model, trained on synthetic data, achieves excellent accuracy in modeling and decomposing input signals from the same distribution, as demonstrated by the experimental results.
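A minimal PyTorch sketch of the idea is given below: a Transformer encoder maps each time step of a 1-D signal to four additive component channels. The layer sizes, the omission of positional encoding, and the reconstruction objective are simplifying assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class AdditiveDecomposer(nn.Module):
    """Sketch of a Transformer that splits a 1-D signal into 4 additive components
    (piecewise-constant, trend, oscillatory, noise). Sizes are illustrative and
    positional encoding is omitted for brevity."""
    def __init__(self, d_model=64, n_heads=4, n_layers=3, n_components=4):
        super().__init__()
        self.embed = nn.Linear(1, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=128,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_components)

    def forward(self, x):                      # x: (B, L) raw signal
        h = self.encoder(self.embed(x.unsqueeze(-1)))
        return self.head(h)                    # (B, L, 4): one channel per component

model = AdditiveDecomposer()
signal = torch.randn(8, 256)
components = model(signal)
# One possible training objective: the components must sum back to the signal,
# plus per-component priors (e.g. total variation on the piecewise-constant channel).
recon_loss = ((components.sum(dim=-1) - signal) ** 2).mean()
print(components.shape, recon_loss.item())
```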
{"title":"Additive decomposition of one-dimensional signals using Transformers","authors":"Samuele Salti ,&nbsp;Andrea Pinto ,&nbsp;Alessandro Lanza ,&nbsp;Serena Morigi","doi":"10.1016/j.patrec.2025.11.002","DOIUrl":"10.1016/j.patrec.2025.11.002","url":null,"abstract":"<div><div>One-dimensional signal decomposition is a well-established and widely used technique across various scientific fields. It serves as a highly valuable pre-processing step for data analysis. While traditional decomposition techniques often rely on mathematical models, recent research suggests that applying the latest deep learning models to this very ill-posed inverse problem represents an exciting, unexplored area with promising potential. This work presents a novel method for the additive decomposition of one-dimensional signals. We leverage the Transformer architecture to decompose signals into their constituent components: piecewise constant, smooth (trend), highly-oscillatory, and noise components. Our model, trained on synthetic data, achieves excellent accuracy in modeling and decomposing input signals from the same distribution, as demonstrated by the experimental results.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"199 ","pages":"Pages 239-245"},"PeriodicalIF":3.3,"publicationDate":"2025-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145617788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SAMIRO: Spatial Attention Mutual Information Regularization with a pre-trained model as Oracle for lane detection
IF 3.3 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-11-17 · DOI: 10.1016/j.patrec.2025.10.013
Hyunjong Lee, Jangho Lee, Jaekoo Lee
Lane detection is an important topic in future mobility solutions. Real-world environmental challenges such as background clutter, varying illumination, and occlusions pose significant obstacles to effective lane detection, particularly when relying on data-driven approaches that require substantial effort and cost for data collection and annotation. To address these issues, lane detection methods must leverage contextual and global information from surrounding lanes and objects. In this paper, we propose a Spatial Attention Mutual Information Regularization with a pre-trained model as an Oracle, called SAMIRO. SAMIRO enhances lane detection performance by transferring knowledge from a pre-trained model while preserving domain-agnostic spatial information. Leveraging SAMIRO’s plug-and-play characteristic, we integrate it into various state-of-the-art lane detection approaches and conduct extensive experiments on major benchmarks such as CULane, Tusimple, and LLAMAS. The results demonstrate that SAMIRO consistently improves performance across different models and datasets. The code will be made available upon publication.
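The plug-and-play regularizer can be sketched as an extra loss term that aligns the student's spatial attention with that of a frozen pre-trained oracle. The attention construction (channel-pooled activations) and the KL divergence used below are simplified stand-ins for the paper's mutual-information formulation; tensor shapes are hypothetical.

```python
import torch
import torch.nn.functional as F

def spatial_attention(feat):
    """Collapse channels into per-pixel attention logits of shape (B, H*W)."""
    return feat.abs().mean(dim=1).flatten(1)

def attention_regularizer(student_feat, oracle_feat):
    """Penalize divergence between student and oracle spatial attention maps.
    KL divergence is a simple stand-in for the mutual-information-based term."""
    if student_feat.shape[-2:] != oracle_feat.shape[-2:]:
        oracle_feat = F.interpolate(oracle_feat, size=student_feat.shape[-2:],
                                    mode="bilinear", align_corners=False)
    logits_s = spatial_attention(student_feat)
    logits_o = spatial_attention(oracle_feat).detach()   # oracle is frozen
    return F.kl_div(F.log_softmax(logits_s, dim=1),
                    F.softmax(logits_o, dim=1), reduction="batchmean")

# Usage inside a training step (hypothetical feature tensors):
student_feat = torch.randn(4, 64, 40, 100, requires_grad=True)
oracle_feat = torch.randn(4, 256, 20, 50)
loss = attention_regularizer(student_feat, oracle_feat)  # added to the lane loss
loss.backward()
```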
{"title":"SAMIRO: Spatial Attention Mutual Information Regularization with a pre-trained model as Oracle for lane detection","authors":"Hyunjong Lee ,&nbsp;Jangho Lee ,&nbsp;Jaekoo Lee","doi":"10.1016/j.patrec.2025.10.013","DOIUrl":"10.1016/j.patrec.2025.10.013","url":null,"abstract":"<div><div>Lane detection is an important topic in the future mobility solutions. Real-world environmental challenges such as background clutter, varying illumination, and occlusions pose significant obstacles to effective lane detection, particularly when relying on data-driven approaches that require substantial effort and cost for data collection and annotation. To address these issues, lane detection methods must leverage contextual and global information from surrounding lanes and objects. In this paper, we propose a <em>Spatial Attention Mutual Information Regularization with a pre-trained model as an Oracle</em>, called <em>SAMIRO</em>. SAMIRO enhances lane detection performance by transferring knowledge from a pre-trained model while preserving domain-agnostic spatial information. Leveraging SAMIRO’s plug-and-play characteristic, we integrate it into various state-of-the-art lane detection approaches and conduct extensive experiments on major benchmarks such as CULane, Tusimple, and LLAMAS. The results demonstrate that SAMIRO consistently improves performance across different models and datasets. The code will be made available upon publication.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"199 ","pages":"Pages 198-204"},"PeriodicalIF":3.3,"publicationDate":"2025-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145579803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Tadmo: A tabular distance measure with move operations
IF 3.3 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-11-15 · DOI: 10.1016/j.patrec.2025.11.009
Dirko Coetsee, Steve Kroon, Ralf Kistner, Adem Kikaj, McElory Hoffmann, Luc De Raedt
Tabular data is ubiquitous in pattern recognition, yet accurately measuring differences between tables remains challenging. Conventional methods rely on cell substitutions and row/column insertions and deletions, often overestimating the difference when cells are simply repositioned. We propose a distance metric that considers move operations, capturing structural changes more faithfully. Although exact computation is NP-complete, a greedy approach computes an effective approximation in practice. Experimental results on real-world datasets demonstrate that our approach yields a more compact and intuitive measure of table dissimilarity, enhancing applications such as clustering, table extraction evaluation, and version history recovery.
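A toy, row-level illustration of the greedy idea is sketched below: rows of two tables are greedily matched by cell overlap, and a matched row that only changed position is charged a cheaper move cost instead of a delete plus insert. The cost weights and the row-level granularity are assumptions for illustration, not the paper's cell-level metric.

```python
def greedy_table_distance(a, b, sub_cost=1.0, indel_cost=1.0, move_cost=0.5):
    """Toy greedy approximation of a table distance that allows move operations.

    a, b: lists of rows (each row a tuple of cell values). Rows are greedily
    matched by number of agreeing cells; a matched row that changed position is
    charged `move_cost` instead of a delete + insert.
    """
    unmatched_b = set(range(len(b)))
    cost = 0.0
    for i, row_a in enumerate(a):
        best_j, best_overlap = None, -1
        for j in unmatched_b:                       # most similar remaining row of b
            overlap = sum(x == y for x, y in zip(row_a, b[j]))
            if overlap > best_overlap:
                best_j, best_overlap = j, overlap
        if best_j is None or best_overlap == 0:
            cost += indel_cost * len(row_a)         # row treated as deleted
            continue
        unmatched_b.discard(best_j)
        diff = max(len(row_a), len(b[best_j])) - best_overlap
        cost += sub_cost * diff                     # differing cells
        if best_j != i:
            cost += move_cost                       # row was repositioned
    for j in unmatched_b:                           # rows only present in b
        cost += indel_cost * len(b[j])
    return cost

t1 = [("id", "name"), (1, "ada"), (2, "bob")]
t2 = [("id", "name"), (2, "bob"), (1, "ada")]       # same rows, swapped order
print(greedy_table_distance(t1, t2))                # 1.0: only two move costs
```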
{"title":"Tadmo: A tabular distance measure with move operations","authors":"Dirko Coetsee ,&nbsp;Steve Kroon ,&nbsp;Ralf Kistner ,&nbsp;Adem Kikaj ,&nbsp;McElory Hoffmann ,&nbsp;Luc De Raedt","doi":"10.1016/j.patrec.2025.11.009","DOIUrl":"10.1016/j.patrec.2025.11.009","url":null,"abstract":"<div><div>Tabular data is ubiquitous in pattern recognition, yet accurately measuring differences between tables remains challenging. Conventional methods rely on cell substitutions and row/column insertions and deletions, often overestimating the difference when cells are simply repositioned. We propose a distance metric that considers move operations, capturing structural changes more faithfully. Although exact computation is NP-complete, a greedy approach computes an effective approximation in practice. Experimental results on real-world datasets demonstrate that our approach yields a more compact and intuitive measure of table dissimilarity, enhancing applications such as clustering, table extraction evaluation, and version history recovery.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"199 ","pages":"Pages 212-218"},"PeriodicalIF":3.3,"publicationDate":"2025-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145579724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep learning and multi-modal MRI for the segmentation of sub-acute and chronic stroke lesions
IF 3.3 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-11-14 · DOI: 10.1016/j.patrec.2025.11.017
Alessandro Di Matteo, Youwan Mahé, Stéphanie Leplaideur, Isabelle Bonan, Elise Bannier, Francesca Galassi
Stroke is a leading cause of morbidity and mortality worldwide. Accurate segmentation of post-stroke lesions on MRI is crucial for assessing brain damage and informing rehabilitation. Manual segmentation, however, is time-consuming and prone to error, motivating the development of automated approaches. This study investigates how deep learning with multimodal MRI can improve automated lesion segmentation in sub-acute and chronic stroke. A single-modality baseline was trained on the public ATLAS v2.0 dataset (655 T1-w scans) using the nnU-Net v2 framework and evaluated on an independent clinical cohort (45 patients with paired T1-w and FLAIR MRI). On this internal dataset, we conducted a systematic ablation comparing (i) direct transfer of the ATLAS baseline, (ii) fine-tuning using T1-w only, and (iii) fusion of T1-w and FLAIR inputs through early, mid, and late fusion strategies, each tested with metric averaging and ensembling.
The ATLAS baseline model achieved a mean Dice score of 0.64 and a lesion-wise F1 score of 0.67. On the clinical dataset, ensembling improved performance (Dice 0.70 vs. 0.68; F1 0.79 vs. 0.73), while fine-tuning on T1-w data further increased accuracy (Dice 0.72; F1 0.78). The best overall results were obtained with a T1+FLAIR late-fusion ensemble (Dice 0.75; F1 0.80; Average Surface Distance (ASD) 2.94 mm), with statistically significant improvements, especially for small and medium lesions.
These results show that fine-tuning and multimodal fusion — particularly late fusion — improve generalization for post-stroke lesion segmentation, supporting robust, reproducible quantification in clinical settings.
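The late-fusion ensemble that performed best here amounts, at prediction time, to averaging the lesion probability maps produced by the independently trained models and thresholding the result; a minimal sketch with placeholder arrays standing in for the T1-w and FLAIR model outputs is shown below. Array shapes, the threshold, and the Dice helper are illustrative assumptions.

```python
import numpy as np

def late_fusion_ensemble(prob_maps, threshold=0.5):
    """Average per-model lesion probability maps and binarize.

    prob_maps: list of arrays of shape (D, H, W) with values in [0, 1], e.g.
    the outputs of the T1-w and FLAIR models (and of ensemble members).
    """
    fused = np.mean(np.stack(prob_maps, axis=0), axis=0)
    return (fused >= threshold).astype(np.uint8)

def dice(pred, gt, eps=1e-7):
    """Dice overlap used to score the fused segmentation."""
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

# Hypothetical example with random maps standing in for model outputs:
t1_prob = np.random.rand(32, 64, 64)
flair_prob = np.random.rand(32, 64, 64)
gt = (np.random.rand(32, 64, 64) > 0.95).astype(np.uint8)
pred = late_fusion_ensemble([t1_prob, flair_prob])
print(pred.shape, dice(pred, gt))
```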
{"title":"Deep learning and multi-modal MRI for the segmentation of sub-acute and chronic stroke lesions","authors":"Alessandro Di Matteo ,&nbsp;Youwan Mahé ,&nbsp;Stéphanie Leplaideur ,&nbsp;Isabelle Bonan ,&nbsp;Elise Bannier ,&nbsp;Francesca Galassi","doi":"10.1016/j.patrec.2025.11.017","DOIUrl":"10.1016/j.patrec.2025.11.017","url":null,"abstract":"<div><div>Stroke is a leading cause of morbidity and mortality worldwide. Accurate segmentation of post-stroke lesions on MRI is crucial for assessing brain damage and informing rehabilitation. Manual segmentation, however, is time-consuming and prone to error, motivating the development of automated approaches. This study investigates how deep learning with multimodal MRI can improve automated lesion segmentation in sub-acute and chronic stroke. A single-modality baseline was trained on the public ATLAS v2.0 dataset (655 T1-w scans) using the nnU-Net v2 framework and evaluated on an independent clinical cohort (45 patients with paired T1-w and FLAIR MRI). On this internal dataset, we conducted a systematic ablation comparing (i) direct transfer of the ATLAS baseline, (ii) fine-tuning using T1-w only, and (iii) fusion of T1-w and FLAIR inputs through early, mid, and late fusion strategies, each tested with metric averaging and ensembling.</div><div>The ATLAS baseline model achieved a mean Dice score of 0.64 and a lesion-wise F1 score of 0.67. On the clinical dataset, ensembling improved performance (Dice 0.70 vs. 0.68; F1 0.79 vs. 0.73), while fine-tuning on T1-w data further increased accuracy (Dice 0.72; F1 0.78). The best overall results were obtained with a T1+FLAIR late-fusion ensemble (Dice 0.75; F1 0.80; Average Surface Distance (ASD) 2.94 mm), with statistically significant improvements, especially for small and medium lesions.</div><div>These results show that fine-tuning and multimodal fusion — particularly late fusion — improve generalization for post-stroke lesion segmentation, supporting robust, reproducible quantification in clinical settings.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"199 ","pages":"Pages 225-231"},"PeriodicalIF":3.3,"publicationDate":"2025-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145617713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Regional patch-based MRI brain age modeling with an interpretable cognitive reserve proxy
IF 3.3 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-11-14 · DOI: 10.1016/j.patrec.2025.11.027
Samuel Maddox, Lemuel Puglisi, Fatemeh Darabifard, Alzheimer’s Disease Neuroimaging Initiative, Australian Imaging Biomarkers and Lifestyle flagship study of aging, Saber Sami, Daniele Ravi
Accurate brain age prediction from MRI is a promising biomarker for brain health and neurodegenerative disease risk, but current deep learning models often lack anatomical specificity and clinical insight. We present a regional patch-based ensemble framework that uses 3D Convolutional Neural Networks (CNNs) trained on bilateral patches from ten subcortical structures, enhancing anatomical sensitivity. Ensemble predictions are combined with cognitive assessments to derive a cognitively informed proxy for cognitive reserve (CR-Proxy), quantifying resilience to age-related brain changes. We train our framework on a large, multi-cohort dataset of healthy controls and test it on independent samples that include individuals with Alzheimer’s disease and mild cognitive impairment. The results demonstrate that our method achieves robust brain age prediction and provides a practical, interpretable CR-Proxy capable of distinguishing diagnostic groups and identifying individuals with high or low cognitive reserve. This pipeline offers a scalable, clinically accessible tool for early risk assessment and personalized brain health monitoring.
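A compact sketch of the regional ensemble is shown below: one small 3-D CNN regressor per subcortical patch, with the ensemble prediction taken as the mean. The network sizes, region subset, and the simple combination of brain-age gap and cognition used as a CR-Proxy placeholder are illustrative assumptions; the paper's exact derivation is not reproduced here.

```python
import torch
import torch.nn as nn

class PatchAgeRegressor(nn.Module):
    """Tiny 3-D CNN that regresses age from one subcortical patch (illustrative)."""
    def __init__(self, in_ch=1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_ch, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.regressor = nn.Linear(16, 1)

    def forward(self, patch):                        # (B, 1, D, H, W)
        return self.regressor(self.features(patch).flatten(1)).squeeze(-1)

# One regressor per bilateral subcortical region; the ensemble prediction is the mean.
regions = ["hippocampus", "thalamus", "putamen"]     # subset, for illustration
models = {r: PatchAgeRegressor() for r in regions}
patches = {r: torch.randn(4, 1, 24, 24, 24) for r in regions}
brain_age = torch.stack([models[r](patches[r]) for r in regions]).mean(dim=0)

# Placeholder CR-Proxy: cognition adjusted by the brain-age gap
# (the paper's exact combination rule is not reproduced here).
chronological_age = torch.tensor([62.0, 70.0, 55.0, 81.0])
cognitive_score = torch.tensor([28.0, 24.0, 30.0, 22.0])
brain_age_gap = brain_age - chronological_age
cr_proxy = cognitive_score - cognitive_score.mean() + 0.1 * brain_age_gap
print(brain_age.shape, cr_proxy.shape)
```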
{"title":"Regional patch-based MRI brain age modeling with an interpretable cognitive reserve proxy","authors":"Samuel Maddox ,&nbsp;Lemuel Puglisi ,&nbsp;Fatemeh Darabifard ,&nbsp;Alzheimer’s Disease Neuroimaging Initiative ,&nbsp;Australian Imaging Biomarkers and Lifestyle flagship study of aging ,&nbsp;Saber Sami ,&nbsp;Daniele Ravi","doi":"10.1016/j.patrec.2025.11.027","DOIUrl":"10.1016/j.patrec.2025.11.027","url":null,"abstract":"<div><div>Accurate brain age prediction from MRI is a promising biomarker for brain health and neurodegenerative disease risk, but current deep learning models often lack anatomical specificity and clinical insight. We present a regional patch-based ensemble framework that uses 3D Convolutional Neural Networks (CNNs) trained on bilateral patches from ten subcortical structures, enhancing anatomical sensitivity. Ensemble predictions are combined with cognitive assessments to derive a cognitively informed proxy for cognitive reserve (CR-Proxy), quantifying resilience to age-related brain changes. We train our framework on a large, multi-cohort dataset of healthy controls and test it on independent samples that include individuals with Alzheimer’s disease and mild cognitive impairment. The results demonstrate that our method achieves robust brain age prediction and provides a practical, interpretable CR-Proxy capable of distinguishing diagnostic groups and identifying individuals with high or low cognitive reserve. This pipeline offers a scalable, clinically accessible tool for early risk assessment and personalized brain health monitoring.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"199 ","pages":"Pages 219-224"},"PeriodicalIF":3.3,"publicationDate":"2025-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145579805","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Channel scaling: An efficient feature representation to enhance the generalization of few-shot learning
IF 3.3 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-11-14 · DOI: 10.1016/j.patrec.2025.11.010
Hongjie Chen, Pei Lu, Xiaoyong Liu, Yuan Ling
In recent years, deep learning has achieved significant breakthroughs in image classification. However, many practical scenarios are severely constrained by limited labeled data. To address this issue, few-shot learning has emerged as a solution, whereby features are extracted from limited training data and generalized to new categories. Existing approaches primarily rely on features extracted from backbone networks that predominantly focus on local regions while neglecting global contextual relationships, thereby limiting the model’s ability to distinguish fine-grained features. This paper introduces a lightweight Channel Scaling Module (CSM) to address this limitation. The proposed CSM operates by unfolding feature maps, applying channel scaling, and performing 3D convolution operations to enrich feature representations. This process simultaneously compresses the number of feature channels while expanding feature dimensions, enhancing the expressiveness of the representations with minimal computational overhead, and improving sensitivity to both local and global features. A series of comprehensive experiments were conducted, using multiple datasets covering standard few-shot classification, fine-grained few-shot classification, and cross-domain few-shot classification. The empirical results indicate that the proposed method consistently attains performance that is either comparable to or superior to that of current state-of-the-art approaches under the majority of scenarios.
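One plausible reading of the module is sketched below: learnable per-channel scaling, unfolding the channel axis into a depth dimension, and a strided 3-D convolution that compresses that axis while producing several 3-D feature maps. Kernel size, stride, and the exact unfolding are assumptions, not the paper's definition.

```python
import torch
import torch.nn as nn

class ChannelScalingModule(nn.Module):
    """One plausible reading of the CSM: scale channels with learnable weights,
    treat the channel axis as depth, and compress it with a strided 3-D conv.
    All hyperparameters here are illustrative assumptions."""
    def __init__(self, in_channels, reduction=2, n_maps=4):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(in_channels))       # channel scaling
        self.conv3d = nn.Conv3d(1, n_maps, kernel_size=(3, 3, 3),
                                stride=(reduction, 1, 1), padding=1)

    def forward(self, x):                         # x: (B, C, H, W)
        x = x * self.scale.view(1, -1, 1, 1)      # per-channel scaling
        x = x.unsqueeze(1)                        # unfold to (B, 1, C, H, W)
        y = self.conv3d(x)                        # (B, n_maps, C/reduction, H, W)
        b, m, c, h, w = y.shape
        # original channel axis compressed; several 3-D maps expand the representation
        return y.view(b, m * c, h, w)

csm = ChannelScalingModule(in_channels=64)
feat = torch.randn(2, 64, 16, 16)
out = csm(feat)
print(out.shape)   # torch.Size([2, 128, 16, 16])
```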
{"title":"Channel scaling: An efficient feature representation to enhance the generalization of few-shot learning","authors":"Hongjie Chen ,&nbsp;Pei Lu ,&nbsp;Xiaoyong Liu ,&nbsp;Yuan Ling","doi":"10.1016/j.patrec.2025.11.010","DOIUrl":"10.1016/j.patrec.2025.11.010","url":null,"abstract":"<div><div>In recent years, deep learning has achieved significant breakthroughs in image classification. However, many practical scenarios are severely constrained by limited labeled data. To address this issue, few-shot learning has arisen as a solution, whereby features are extracted from limited training data and generalized to new categories. Existing approaches primarily rely on features extracted from backbone networks that predominantly focus on local regions while neglecting global contextual relationships, thereby limiting the model’s ability to distinguish fine-grained features. This paper introduces a lightweight Channel Scaling Module (CSM) to address this limitation. The proposed CSM operates by unfolding feature maps, applying channel scaling, and performing 3D convolution operations to enrich feature representations. This process simultaneously compresses the number of feature channels while expanding feature dimensions, enhancing the expressiveness of the representations with minimal computational overhead, and improving sensitivity to both local and global features. A series of comprehensive experiments were conducted, using multiple datasets covering standard few-shot classification, fine-grained few-shot classification, and cross-domain few-shot classification. The empirical results indicate that the proposed method consistently attains performance that is either comparable to or superior to that of current state-of-the-art approaches under the majority of scenarios.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"199 ","pages":"Pages 163-169"},"PeriodicalIF":3.3,"publicationDate":"2025-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145579723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Anatomical foundation models for brain MRIs
IF 3.3 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-11-14 · DOI: 10.1016/j.patrec.2025.11.028
Carlo Alberto Barbano, Matteo Brunello, Benoit Dufumier, Marco Grangetto, Alzheimer’s Disease Neuroimaging Initiative
Deep Learning (DL) in neuroimaging has become increasingly relevant for detecting neurological conditions and neurodegenerative disorders. One of the predominant biomarkers in neuroimaging is represented by brain age, which has been shown to be a good indicator for different conditions, such as Alzheimer’s Disease. Using brain age for weakly supervised pre-training of DL models in transfer learning settings has also recently shown promising results, especially when dealing with data scarcity of different conditions. On the other hand, anatomical information of brain MRIs (e.g. cortical thickness) can provide important information for learning good representations that can be transferred to many downstream tasks. In this work, we propose AnatCL, an anatomical foundation model for structural brain MRIs that (i) leverages anatomical information in a weakly contrastive learning approach, and (ii) achieves state-of-the-art performance across many different downstream tasks. To validate our approach, we consider 12 different downstream tasks for the diagnosis of different conditions such as Alzheimer’s Disease, autism spectrum disorder, and schizophrenia. Furthermore, we also target the prediction of 10 different clinical assessment scores using structural MRI data. Our findings show that incorporating anatomical information during pre-training leads to more robust and generalizable representations. Pre-trained models can be found at: https://github.com/EIDOSLAB/AnatCL.
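The weakly contrastive use of anatomical information can be sketched as a contrastive loss in which pairs of scans with similar anatomical measures (e.g., cortical thickness) act as soft positives. The Gaussian kernel, temperature, and InfoNCE-style form below are assumptions in the spirit of the description, not AnatCL's exact objective.

```python
import torch
import torch.nn.functional as F

def anatomy_weighted_contrastive_loss(z, anat, sigma=0.5, temperature=0.1):
    """Weakly supervised contrastive loss: pairs with similar anatomical measures
    are treated as soft positives.

    z:    (N, D) embeddings of N scans
    anat: (N,)  scalar anatomical measure per scan (e.g. mean cortical thickness)
    """
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature                        # (N, N) cosine similarities
    diff = anat.unsqueeze(0) - anat.unsqueeze(1)
    w = torch.exp(-diff.pow(2) / (2 * sigma ** 2))       # soft positive weights
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    w = w.masked_fill(eye, 0.0)
    log_prob = sim - torch.logsumexp(sim.masked_fill(eye, float("-inf")),
                                     dim=1, keepdim=True)
    loss = -(w * log_prob).sum(dim=1) / w.sum(dim=1).clamp_min(1e-8)
    return loss.mean()

# Hypothetical batch: 16 embeddings with per-scan anatomical values.
z = torch.randn(16, 128, requires_grad=True)
anat = torch.rand(16) * 3.0
loss = anatomy_weighted_contrastive_loss(z, anat)
loss.backward()
print(loss.item())
```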
{"title":"Anatomical foundation models for brain MRIs","authors":"Carlo Alberto Barbano ,&nbsp;Matteo Brunello ,&nbsp;Benoit Dufumier ,&nbsp;Marco Grangetto ,&nbsp;Alzheimer’s Disease Neuroimaging Initiative","doi":"10.1016/j.patrec.2025.11.028","DOIUrl":"10.1016/j.patrec.2025.11.028","url":null,"abstract":"<div><div>Deep Learning (DL) in neuroimaging has become increasingly relevant for detecting neurological conditions and neurodegenerative disorders. One of the predominant biomarkers in neuroimaging is represented by brain age, which has been shown to be a good indicator for different conditions, such as Alzheimer’s Disease. Using brain age for weakly supervised pre-training of DL models in transfer learning settings has also recently shown promising results, especially when dealing with data scarcity of different conditions. On the other hand, anatomical information of brain MRIs (e.g. cortical thickness) can provide important information for learning good representations that can be transferred to many downstream tasks. In this work, we propose AnatCL, an anatomical foundation model for structural brain MRIs that (i.) leverages anatomical information in a weakly contrastive learning approach, and (ii.) achieves state-of-the-art performances across many different downstream tasks. To validate our approach we consider 12 different downstream tasks for the diagnosis of different conditions such as Alzheimer’s Disease, autism spectrum disorder, and schizophrenia. Furthermore, we also target the prediction of 10 different clinical assessment scores using structural MRI data. Our findings show that incorporating anatomical information during pre-training leads to more robust and generalizable representations. Pre-trained models can be found at: <span><span>https://github.com/EIDOSLAB/AnatCL</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"199 ","pages":"Pages 178-184"},"PeriodicalIF":3.3,"publicationDate":"2025-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145579801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Discriminative response pruning for robust and efficient deep networks under label noise
IF 3.3 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-11-13 · DOI: 10.1016/j.patrec.2025.11.025
Shuwen Jin, Junzhu Mao, Zeren Sun, Yazhou Yao
Pruning is widely recognized as a promising approach for reducing the computational and storage demands of deep neural networks, facilitating lightweight model deployment on resource-limited devices. However, most existing pruning techniques assume the availability of accurate training labels, overlooking the prevalence of noisy labels in real-world settings. Deep networks have strong memorization capability, making them prone to overfitting noisy labels and thereby sensitive to the removal of network parameters. As a result, existing methods often encounter limitations when directly applied to the task of pruning models trained with noisy labels. To this end, we propose Discriminative Response Pruning (DRP) to robustly prune models trained with noisy labels. Specifically, DRP begins by identifying clean and noisy samples and reorganizing them into class-specific subsets. Then, it estimates the importance of model parameters by evaluating their responses to each subset, rewarding parameters exhibiting strong responses to clean data and penalizing those overfitting to noisy data. A class-wise reweighted aggregation strategy is then employed to compute the final importance score, which guides the pruning decisions. Extensive experiments across various models and noise conditions are conducted to demonstrate the efficacy and robustness of our method.
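A toy, filter-level version of the scoring step is sketched below: each filter is rewarded for its mean response to clean samples of a class and penalized for its response to noisy ones, and scores are averaged over classes (uniform reweighting here) before the lowest-scoring filters are pruned. The scoring rule and granularity are simplified assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def drp_filter_scores(conv, clean_by_class, noisy_by_class, penalty=0.5):
    """Toy per-filter importance in the spirit of Discriminative Response Pruning.

    clean_by_class / noisy_by_class: dict class_id -> tensor (N_c, C, H, W) of
    inputs to `conv`. Filters with strong responses to clean data are rewarded,
    strong responses to noisy data are penalized, then scores are averaged over
    classes (uniform class reweighting in this sketch).
    """
    per_class = []
    for c in clean_by_class:
        clean_resp = conv(clean_by_class[c]).abs().mean(dim=(0, 2, 3))   # (F,)
        noisy = noisy_by_class.get(c)
        noisy_resp = (conv(noisy).abs().mean(dim=(0, 2, 3))
                      if noisy is not None else torch.zeros_like(clean_resp))
        per_class.append(clean_resp - penalty * noisy_resp)
    return torch.stack(per_class).mean(dim=0)                            # (F,)

# Hypothetical setup: one conv layer, 2 classes, pre-split clean/noisy batches.
conv = nn.Conv2d(3, 16, 3, padding=1)
clean = {0: torch.randn(8, 3, 32, 32), 1: torch.randn(8, 3, 32, 32)}
noisy = {0: torch.randn(4, 3, 32, 32), 1: torch.randn(4, 3, 32, 32)}
scores = drp_filter_scores(conv, clean, noisy)
keep = scores.argsort(descending=True)[: int(0.75 * len(scores))]  # prune lowest 25%
print(scores.shape, keep.tolist())
```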
{"title":"Discriminative response pruning for robust and efficient deep networks under label noise","authors":"Shuwen Jin,&nbsp;Junzhu Mao,&nbsp;Zeren Sun,&nbsp;Yazhou Yao","doi":"10.1016/j.patrec.2025.11.025","DOIUrl":"10.1016/j.patrec.2025.11.025","url":null,"abstract":"<div><div>Pruning is widely recognized as a promising approach for reducing the computational and storage demands of deep neural networks, facilitating lightweight model deployment on resource-limited devices. However, most existing pruning techniques assume the availability of accurate training labels, overlooking the prevalence of noisy labels in real-world settings. Deep networks have strong memorization capability, making them prone to overfitting noisy labels and thereby sensitive to the removal of network parameters. As a result, existing methods often encounter limitations when directly applied to the task of pruning models trained with noisy labels. To this end, we propose Discriminative Response Pruning (DRP) to robustly prune models trained with noisy labels. Specifically, DRP begins by identifying clean and noisy samples and reorganizing them into class-specific subsets. Then, it estimates the importance of model parameters by evaluating their responses to each subset, rewarding parameters exhibiting strong responses to clean data and penalizing those overfitting to noisy data. A class-wise reweighted aggregation strategy is then employed to compute the final importance score, which guides the pruning decisions. Extensive experiments across various models and noise conditions are conducted to demonstrate the efficacy and robustness of our method.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"199 ","pages":"Pages 170-177"},"PeriodicalIF":3.3,"publicationDate":"2025-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145579799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Decoding attention from the visual cortex: fMRI-based prediction of human saliency maps
IF 3.3 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-11-12 · DOI: 10.1016/j.patrec.2025.11.019
Salvatore Calcagno, Marco Finocchiaro, Giovanni Bellitto, Concetto Spampinato, Federica Proietto Salanitri
Modeling visual attention from brain activity offers a powerful route to understanding how spatial salience is encoded in the human visual system. While deep learning models can accurately predict fixations from image content, it remains unclear whether similar saliency maps can be reconstructed directly from neural signals. In this study, we investigate the feasibility of decoding high-resolution spatial attention maps from 3T fMRI data. This study is the first to demonstrate that high-resolution, behaviorally-validated saliency maps can be decoded directly from 3T fMRI signals. We propose a two-stage decoder that transforms multivariate voxel responses from region-specific visual areas into spatial saliency distributions, using DeepGaze II maps as proxy supervision. Evaluation is conducted against new eye-tracking data collected on a held-out set of natural images. Results show that decoded maps significantly correlate with human fixations, particularly when using activity from early visual areas (V1–V4), which contribute most strongly to reconstruction accuracy. Higher-level areas yield above-chance performance but weaker predictions. These findings suggest that spatial attention is robustly represented in early visual cortex and support the use of fMRI-based decoding as a tool for probing the neural basis of salience in naturalistic viewing. Our code and eye-tracking annotations are available on GitHub.
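The two-stage decoder can be pictured with the sketch below: a linear stage maps ROI voxel responses to a coarse spatial latent, and a convolutional stage upsamples it into a saliency distribution trained against DeepGaze II proxy maps with a KL objective. Voxel counts, grid sizes, and the loss are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VoxelToSaliency(nn.Module):
    """Two-stage decoder sketch: (1) a linear map from ROI voxel responses to a
    coarse spatial latent, (2) a convolutional upsampler producing a saliency map."""
    def __init__(self, n_voxels=4000, grid=16, out_size=64):
        super().__init__()
        self.grid, self.out_size = grid, out_size
        self.stage1 = nn.Linear(n_voxels, grid * grid)
        self.stage2 = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, voxels):                           # (B, n_voxels)
        latent = self.stage1(voxels).view(-1, 1, self.grid, self.grid)
        latent = F.interpolate(latent, size=self.out_size, mode="bilinear",
                               align_corners=False)
        logits = self.stage2(latent).flatten(1)
        return F.log_softmax(logits, dim=1)              # log-density over pixels

model = VoxelToSaliency()
voxels = torch.randn(4, 4000)                            # ROI-restricted fMRI responses
target = torch.rand(4, 64 * 64)
target = target / target.sum(dim=1, keepdim=True)        # DeepGaze-style proxy map
loss = F.kl_div(model(voxels), target, reduction="batchmean")
loss.backward()
print(loss.item())
```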
{"title":"Decoding attention from the visual cortex: fMRI-based prediction of human saliency maps","authors":"Salvatore Calcagno ,&nbsp;Marco Finocchiaro ,&nbsp;Giovanni Bellitto,&nbsp;Concetto Spampinato,&nbsp;Federica Proietto Salanitri","doi":"10.1016/j.patrec.2025.11.019","DOIUrl":"10.1016/j.patrec.2025.11.019","url":null,"abstract":"<div><div>Modeling visual attention from brain activity offers a powerful route to understanding how spatial salience is encoded in the human visual system. While deep learning models can accurately predict fixations from image content, it remains unclear whether similar saliency maps can be reconstructed directly from neural signals. In this study, we investigate the feasibility of decoding high-resolution spatial attention maps from 3T fMRI data. This study is the first to demonstrate that high-resolution, behaviorally-validated saliency maps can be decoded directly from 3T fMRI signals. We propose a two-stage decoder that transforms multivariate voxel responses from region-specific visual areas into spatial saliency distributions, using DeepGaze II maps as proxy supervision. Evaluation is conducted against new eye-tracking data collected on a held-out set of natural images. Results show that decoded maps significantly correlate with human fixations, particularly when using activity from early visual areas (V1–V4), which contribute most strongly to reconstruction accuracy. Higher-level areas yield above-chance performance but weaker predictions. These findings suggest that spatial attention is robustly represented in early visual cortex and support the use of fMRI-based decoding as a tool for probing the neural basis of salience in naturalistic viewing. Our code and eye-tracking annotations are available on <span><span>GitHub</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"199 ","pages":"Pages 156-162"},"PeriodicalIF":3.3,"publicationDate":"2025-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145520476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0