
Latest Publications in Pattern Recognition Letters

Anatomical foundation models for brain MRIs
IF 3.3 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-11-14 DOI: 10.1016/j.patrec.2025.11.028
Carlo Alberto Barbano , Matteo Brunello , Benoit Dufumier , Marco Grangetto , Alzheimer’s Disease Neuroimaging Initiative
Deep Learning (DL) in neuroimaging has become increasingly relevant for detecting neurological conditions and neurodegenerative disorders. One of the predominant biomarkers in neuroimaging is brain age, which has been shown to be a good indicator for different conditions, such as Alzheimer’s Disease. Using brain age for weakly supervised pre-training of DL models in transfer learning settings has also recently shown promising results, especially when dealing with data scarcity for different conditions. On the other hand, anatomical information in brain MRIs (e.g., cortical thickness) can provide important cues for learning good representations that can be transferred to many downstream tasks. In this work, we propose AnatCL, an anatomical foundation model for structural brain MRIs that (i) leverages anatomical information in a weakly contrastive learning approach, and (ii) achieves state-of-the-art performance across many different downstream tasks. To validate our approach, we consider 12 different downstream tasks for the diagnosis of different conditions such as Alzheimer’s Disease, autism spectrum disorder, and schizophrenia. Furthermore, we also target the prediction of 10 different clinical assessment scores using structural MRI data. Our findings show that incorporating anatomical information during pre-training leads to more robust and generalizable representations. Pre-trained models can be found at: https://github.com/EIDOSLAB/AnatCL.
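The "weakly contrastive" use of anatomical information can be illustrated with a small sketch: the attraction between two scan embeddings is weighted by how similar their anatomical measures are (e.g., mean cortical thickness), so the encoder never needs hard positive/negative labels. This is a minimal sketch of the general idea only; the RBF kernel, the function names, and all hyperparameters are assumptions, not the published AnatCL objective.

```python
import torch
import torch.nn.functional as F

def anatcl_style_loss(z, anat, sigma=0.1, tau=0.07):
    """z: (N, D) scan embeddings; anat: (N,) anatomical measure per scan.

    Soft positives: pairs with similar anatomy get a larger kernel weight w.
    """
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / tau                                  # (N, N) scaled cosine similarity
    d2 = (anat[:, None] - anat[None, :]) ** 2
    w = torch.exp(-d2 / (2 * sigma ** 2))                  # RBF kernel on anatomy (assumed)
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    w = w.masked_fill(eye, 0.0)                            # no self-pairs
    log_p = sim - torch.logsumexp(sim.masked_fill(eye, float("-inf")), dim=1, keepdim=True)
    return -(w * log_p).sum(1).div(w.sum(1).clamp_min(1e-8)).mean()

# Usage: z = encoder(mri_batch); loss = anatcl_style_loss(z, cortical_thickness)
```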
Citations: 0
Discriminative response pruning for robust and efficient deep networks under label noise
IF 3.3 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-11-13 DOI: 10.1016/j.patrec.2025.11.025
Shuwen Jin, Junzhu Mao, Zeren Sun, Yazhou Yao
Pruning is widely recognized as a promising approach for reducing the computational and storage demands of deep neural networks, facilitating lightweight model deployment on resource-limited devices. However, most existing pruning techniques assume the availability of accurate training labels, overlooking the prevalence of noisy labels in real-world settings. Deep networks have strong memorization capability, making them prone to overfitting noisy labels and thus sensitive to the removal of network parameters. As a result, existing methods often encounter limitations when applied directly to pruning models trained with noisy labels. To this end, we propose Discriminative Response Pruning (DRP) to robustly prune models trained with noisy labels. Specifically, DRP begins by identifying clean and noisy samples and reorganizing them into class-specific subsets. It then estimates the importance of model parameters by evaluating their responses to each subset, rewarding parameters that respond strongly to clean data and penalizing those that overfit to noisy data. A class-wise reweighted aggregation strategy is then employed to compute the final importance score, which guides the pruning decisions. Extensive experiments across various models and noise conditions demonstrate the efficacy and robustness of our method.
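The scoring idea, rewarding responses to clean data and penalizing responses to noisy data, can be sketched as follows. Here a parameter's "response" is proxied by accumulated |weight x gradient| (a common saliency proxy); the penalty weight lam and all names are assumptions, not the exact DRP formulation.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

def response(model, loader, loss_fn, device="cpu"):
    """Accumulate a |weight * grad| response score per weight tensor."""
    scores = {n: torch.zeros_like(p) for n, p in model.named_parameters() if p.dim() > 1}
    for x, y in loader:
        model.zero_grad()
        loss_fn(model(x.to(device)), y.to(device)).backward()
        for n, p in model.named_parameters():
            if n in scores and p.grad is not None:
                scores[n] += (p * p.grad).abs()
    return scores

def drp_style_prune(model, clean_loader, noisy_loader, loss_fn, lam=0.5, amount=0.3):
    s_clean = response(model, clean_loader, loss_fn)
    s_noisy = response(model, noisy_loader, loss_fn)
    for name, module in model.named_modules():
        if isinstance(module, (nn.Linear, nn.Conv2d)):
            key = f"{name}.weight" if name else "weight"
            score = s_clean[key] - lam * s_noisy[key]      # reward clean, penalize noisy
            thresh = torch.quantile(score.flatten(), amount)
            prune.custom_from_mask(module, "weight", mask=(score > thresh).float())

# Usage (clean/noisy loaders come from the sample-identification step):
# drp_style_prune(model, clean_loader, noisy_loader, nn.CrossEntropyLoss())
```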
Citations: 0
Decoding attention from the visual cortex: fMRI-based prediction of human saliency maps
IF 3.3 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-11-12 DOI: 10.1016/j.patrec.2025.11.019
Salvatore Calcagno , Marco Finocchiaro , Giovanni Bellitto, Concetto Spampinato, Federica Proietto Salanitri
Modeling visual attention from brain activity offers a powerful route to understanding how spatial salience is encoded in the human visual system. While deep learning models can accurately predict fixations from image content, it remains unclear whether similar saliency maps can be reconstructed directly from neural signals. In this study, we investigate the feasibility of decoding high-resolution spatial attention maps from 3T fMRI data. This study is the first to demonstrate that high-resolution, behaviorally validated saliency maps can be decoded directly from 3T fMRI signals. We propose a two-stage decoder that transforms multivariate voxel responses from region-specific visual areas into spatial saliency distributions, using DeepGaze II maps as proxy supervision. Evaluation is conducted against new eye-tracking data collected on a held-out set of natural images. Results show that decoded maps significantly correlate with human fixations, particularly when using activity from early visual areas (V1–V4), which contribute most strongly to reconstruction accuracy. Higher-level areas yield above-chance performance but weaker predictions. These findings suggest that spatial attention is robustly represented in early visual cortex and support the use of fMRI-based decoding as a tool for probing the neural basis of salience in naturalistic viewing. Our code and eye-tracking annotations are available on GitHub.
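A minimal sketch of the two-stage decoder shape described above: stage one compresses region-specific voxel responses into a latent vector, and stage two decodes that latent into a spatial saliency distribution trained against proxy maps such as DeepGaze II. Layer sizes, the KL objective, and all names are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SaliencyDecoder(nn.Module):
    def __init__(self, n_voxels, latent=256, grid=8):
        super().__init__()
        # Stage 1: voxel responses from the chosen ROI -> compact latent.
        self.stage1 = nn.Sequential(nn.Linear(n_voxels, latent), nn.ReLU(),
                                    nn.Linear(latent, latent), nn.ReLU())
        # Stage 2: latent -> coarse spatial map, upsampled to the output size.
        self.stage2 = nn.Sequential(nn.Linear(latent, grid * grid),
                                    nn.Unflatten(1, (1, grid, grid)),
                                    nn.Upsample(scale_factor=8, mode="bilinear"))

    def forward(self, voxels):
        return self.stage2(self.stage1(voxels))            # (B, 1, 64, 64) logits

def saliency_kl(pred_logits, target_map):
    """KL between predicted and proxy saliency, both as spatial distributions."""
    p = F.log_softmax(pred_logits.flatten(1), dim=1)
    q = target_map.flatten(1)
    q = q / q.sum(1, keepdim=True).clamp_min(1e-8)
    return F.kl_div(p, q, reduction="batchmean")

model = SaliencyDecoder(n_voxels=4000)
loss = saliency_kl(model(torch.randn(2, 4000)), torch.rand(2, 1, 64, 64))
```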
Citations: 0
TRIS: A multimodal and multitask framework for unifying text–image retrieval and referring image segmentation
IF 3.3 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-11-12 DOI: 10.1016/j.patrec.2025.11.026
Zengzhi Qian , Yulong Sun , Weide Kang , Bingke Zhu , Jinqiao Wang
Existing text–image retrieval methods often underperform due to limited understanding of target objects in both text and images. To address this limitation, we propose TRIS, a multimodal and multitask framework that unifies text–image retrieval and referring image segmentation. TRIS accommodates four distinct text–image retrieval tasks and the referring image segmentation task. Through multitask coupled learning, features of the retrieval and segmentation tasks interact, mutually facilitating multimodal feature learning and thereby enhancing the performance of both tasks. Moreover, by exploiting the masks predicted by the segmentation task, we apply a reranking technique to further enhance the performance of the retrieval task. Simultaneously, capitalizing on the consistency of images in the retrieval task, we propose a consistency loss to improve the target consistency of the segmentation task. Experimentally, we validate the efficacy of the TRIS framework across multiple text–image retrieval and referring image segmentation datasets.
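The mask-based reranking step can be sketched independently of the rest of the framework: rank images by text–image similarity first, then re-score the top-k candidates with the confidence of the referring-segmentation mask predicted for the query. The weighted-sum fusion rule and all names below are assumptions, not the paper's exact procedure.

```python
import torch

def rerank(sim, mask_conf, k=10, alpha=0.7):
    """sim: (Q, N) text-image similarities; mask_conf: (Q, N) in [0, 1],
    e.g., the mean foreground probability of each predicted mask."""
    fused = sim.clone()
    topk = sim.topk(k, dim=1).indices                      # candidates to re-score
    rows = torch.arange(sim.size(0)).unsqueeze(1)
    fused[rows, topk] = alpha * sim[rows, topk] + (1 - alpha) * mask_conf[rows, topk]
    return fused.argsort(dim=1, descending=True)           # re-ranked image indices

ranking = rerank(torch.randn(4, 100).softmax(dim=1), torch.rand(4, 100))
```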
Citations: 0
AdaPL: Adaptive Pseudo Labeling for deep active learning in image classification
IF 3.3 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-11-11 DOI: 10.1016/j.patrec.2025.11.024
Qiang Fang, Xin Xu
Deep supervised learning has achieved remarkable success in many fields, but it often relies on a large amount of annotated data, leading to high costs. An alternative solution is active learning, which aims to enable models to achieve optimal performance with less annotated data. Most standard active learning methods focus on proposing better selection strategies for labeling representative samples while ignoring the remaining unlabeled samples. Motivated by the fact that properly utilizing unlabeled data can improve model performance, we present a novel framework for active learning with pseudo-labeling in this paper. The core of our approach is a novel pseudo-labeling method with an adaptive threshold. Extensive experiments on three typical image classification tasks demonstrate that our approach achieves state-of-the-art performance compared to existing baseline methods. Moreover, our approach is efficient, flexible, and task-agnostic, making it compatible with most standard active learning strategies. Our code will be available at https://github.com/nudtqiangfang/AdaPL.
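A minimal sketch of pseudo-labeling with an adaptive threshold, the core mechanism named above. Here the threshold tracks an exponential moving average of the model's confidence on unlabeled batches; the paper's exact adaptation rule may differ, and all names are assumptions.

```python
import torch
import torch.nn.functional as F

class AdaptiveThreshold:
    """EMA of the model's mean max-confidence on unlabeled batches."""
    def __init__(self, init=0.8, momentum=0.99):
        self.tau, self.m = init, momentum

    def update(self, probs):                               # probs: (B, C) softmax outputs
        self.tau = self.m * self.tau + (1 - self.m) * probs.max(dim=1).values.mean().item()
        return self.tau

def pseudo_label(logits, thresholder):
    probs = F.softmax(logits, dim=1)
    tau = thresholder.update(probs)
    conf, labels = probs.max(dim=1)
    mask = conf >= tau                                     # keep only confident predictions
    return labels, mask

th = AdaptiveThreshold()
labels, mask = pseudo_label(torch.randn(32, 10), th)       # train on labels[mask] only
```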
Citations: 0
DG-DETR: Toward domain generalized detection transformer
IF 3.3 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-11-10 DOI: 10.1016/j.patrec.2025.11.023
Seongmin Hwang , Daeyoung Han , Moongu Jeon
End-to-end Transformer-based detectors (DETRs) have demonstrated strong detection performance. However, domain generalization (DG) research has primarily focused on convolutional neural network (CNN)-based detectors, while paying little attention to enhancing the robustness of DETRs. In this letter, we introduce a Domain Generalized DEtection TRansformer (DG-DETR), a simple, effective, and plug-and-play method that improves out-of-distribution (OOD) robustness for DETRs. Specifically, we propose a novel domain-agnostic query selection strategy that removes domain-induced biases from object queries via orthogonal projection onto the instance-specific style space. Additionally, we leverage a wavelet decomposition to disentangle features into domain-invariant and domain-specific components, enabling synthesis of diverse latent styles while preserving the semantic features of objects. Experimental results validate the effectiveness of DG-DETR. Our code is available at https://github.com/smin-hwang/DG-DETR.
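The domain-agnostic query idea rests on a standard linear-algebra step: removing the component of each object query that lies in an instance-specific style subspace via orthogonal projection. A minimal sketch follows, assuming the style basis is obtained by orthonormalizing a few style vectors with QR (how DG-DETR actually builds the style space is not specified in this abstract).

```python
import torch

def remove_style(queries, style_vecs):
    """queries: (N, D) object queries; style_vecs: (K, D) spanning the style subspace."""
    basis, _ = torch.linalg.qr(style_vecs.t())             # orthonormal basis, (D, K)
    proj = basis @ (basis.t() @ queries.t())               # component inside the style space
    return queries - proj.t()                              # keep the orthogonal complement

q = torch.randn(300, 256)                                  # object queries
s = torch.randn(4, 256)                                    # instance-specific style directions
q_clean = remove_style(q, s)
# Sanity check: cleaned queries are (numerically) orthogonal to every style direction.
print((q_clean @ torch.linalg.qr(s.t())[0]).abs().max())
```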
Citations: 0
Lightweight adaptive spatiotemporal information fusion network for medical time series classification
IF 3.3 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-11-10 DOI: 10.1016/j.patrec.2025.11.021
Fan Yang , Anping Zeng , Chunlin He , Chaorong Li , Xingjie Wang , Shijie Xu
Medical time series (MedTS) data, such as electroencephalography (EEG) and electrocardiography (ECG), play a crucial role in monitoring physiological signals and diagnosing neurological and cardiovascular conditions. While deep learning methods have achieved notable success in general time series classification, they often struggle to effectively capture the unique spatiotemporal dependencies inherent in clinical MedTS data. Additionally, their high computational complexity and lack of interpretability hinder practical deployment in healthcare settings. To address these challenges, we propose ASTIFNet, a lightweight Adaptive SpatioTemporal Information Fusion Network. The framework first employs a cross-channel fusion mechanism and multi-granularity feature extraction to hierarchically model spatiotemporal patterns. Next, a variance-based attention module is incorporated to dynamically focus on clinically relevant features while minimizing computational overhead. Finally, the model preserves the original time-series structure through feature-map-based processing, enabling transparent decision-making with post-hoc interpretability. Experiments on four public benchmarks demonstrate that ASTIFNet matches state-of-the-art performance while requiring fewer than 10KB of parameters.
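A variance-based attention module can be very cheap, which fits the lightweight framing above: channels whose responses vary more over time are gated up, using only one learnable weight per channel. The gating form below is an assumption, not the published ASTIFNet module.

```python
import torch
import torch.nn as nn

class VarianceAttention(nn.Module):
    """Gate each channel by its temporal variance; one parameter per channel."""
    def __init__(self, channels):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(channels))

    def forward(self, x):                                  # x: (B, C, T) time series
        v = x.var(dim=2)                                   # (B, C) temporal variance
        gate = torch.sigmoid(self.scale * v)               # variance-driven channel gate
        return x * gate.unsqueeze(-1)

attn = VarianceAttention(channels=12)
out = attn(torch.randn(8, 12, 1000))                       # e.g., 12-lead ECG-like input
```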
Citations: 0
Cross-modality white matter lesion segmentation by modality de-identification
IF 3.3 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-11-10 DOI: 10.1016/j.patrec.2025.11.020
Domen Preložnik, Žiga Špiclin
Multiple sclerosis (MS) diagnosis and prognosis relies heavily on the accurate detection and segmentation of white matter lesions (WML) in magnetic resonance imaging (MRI). Different MRI sequences, particularly Fluid-Attenuated Inversion Recovery (FLAIR) and Double Inversion Recovery (DIR), offer complementary information about lesions but are rarely simultaneously acquired in clinical imaging protocols. We introduce a novel self-supervised modality sequential unlearning (SSMSU) adaptation technique that employs modality de-identification to extract modality-invariant features from MRI images, improving WML segmentation regardless of the input modality. Building upon the public nnU-Net framework, we introduce auxiliary modality classifiers at each resolution level and utilize a confusion loss to explicitly suppress modality-specific features while training on alternating modality inputs. We evaluated the approach on an in-house dataset of 28 MS patients with paired FLAIR and DIR, the MSSEG 2016 dataset of 53 subjects with paired FLAIR and proton density (DP), and 22 FLAIR test cases from MSLesSeg 2024. All cases had expert-annotated WML segmentation as reference. Experiments involved within- and between-dataset validation, comparing the performance of single- and multi-modality single-channel, and multi-modality multi-channel training strategies based on Dice Similarity Coefficient (DSC), Lesion-wise True Positive Rate (LTPR), and Lesion-wise False Discovery Rate (LFDR). On the in-house and MSSEG 2016 datasets, SSMSU achieved the best DSC and LTPR among single-channel models, with LFDR levels comparable to the best values, while attaining the same level of performance as multi-channel models that required paired FLAIR/DIR or FLAIR/DP modalities. It ranked 2nd among single-channel methods on MSLesSeg 2024. Effectively suppressing modality-related information results in a technique that is cross-modal and delivers a flexible and robust automated WML segmentation tool.
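The confusion-loss mechanism can be sketched as a standard two-step alternation: an auxiliary classifier learns to predict the input modality from encoder features, while the encoder is trained to make that prediction uniform, i.e., uninformative. The alternation schedule, the toy encoder, and all names are assumptions; the paper applies such classifiers at each resolution level of nnU-Net.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def confusion_loss(logits):
    """Cross-entropy against the uniform distribution over modalities."""
    return -F.log_softmax(logits, dim=1).mean()

encoder = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.AdaptiveAvgPool2d(1), nn.Flatten())
mod_clf = nn.Linear(8, 2)                                  # e.g., FLAIR vs DIR
opt_enc = torch.optim.Adam(encoder.parameters(), lr=1e-4)
opt_clf = torch.optim.Adam(mod_clf.parameters(), lr=1e-4)

x, modality = torch.randn(4, 1, 64, 64), torch.randint(0, 2, (4,))
# Step 1: the auxiliary classifier learns to identify the modality.
opt_clf.zero_grad()
F.cross_entropy(mod_clf(encoder(x).detach()), modality).backward()
opt_clf.step()
# Step 2: the encoder learns to confuse it (uniform predictions).
opt_enc.zero_grad()
confusion_loss(mod_clf(encoder(x))).backward()
opt_enc.step()
```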
Citations: 0
Transformer-based dynamic cell bounding box refinement for end-to-end Table Structure Recognition
IF 3.3 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-11-10 DOI: 10.1016/j.patrec.2025.11.011
Yang Xue, Haosheng Cai, Zhuoming Li, Lianwen Jin
Table Structure Recognition (TSR) can adopt image-to-sequence solutions to predict both logical and physical structure simultaneously. However, while these models excel at identifying the logical structure, they often struggle with accurate cell detection. To address this challenge, we propose a Transformer-based Dynamic cell bounding Box refinement for end-to-end TSR, named DynamicBoxTransformer. Specifically, we incorporate a cell bounding box regression decoder, which takes the output of the HTML sequence decoder as input. The cell regression decoder uses reference bounding box coordinates to create spatial queries that provide explicit guidance to key areas and enhance the accuracy of cell bounding boxes layer by layer. To mitigate error accumulation, we introduce denoising training, particularly focusing on the offset of rows and columns. In addition, we design masks that enable the model to make full use of contextual information. Experimental results show that our DynamicBoxTransformer achieves competitive performance on natural scene table datasets. Compared to previous image-to-sequence approaches, DynamicBoxTransformer demonstrates significant improvements in accurate cell detection.
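The layer-by-layer refinement from reference coordinates can be sketched in the Deformable-DETR convention, where each decoder layer predicts an offset that updates the normalized (cx, cy, w, h) box in inverse-sigmoid space. Whether the paper uses exactly this parameterization is an assumption, as are all names below.

```python
import torch
import torch.nn as nn

def inverse_sigmoid(x, eps=1e-5):
    x = x.clamp(eps, 1 - eps)
    return torch.log(x / (1 - x))

class BoxRefineLayer(nn.Module):
    def __init__(self, d_model=256):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, 8, batch_first=True)
        self.delta = nn.Linear(d_model, 4)                 # per-query box offset

    def forward(self, queries, memory, ref_boxes):
        q, _ = self.attn(queries, memory, memory)          # attend to image features
        ref_boxes = torch.sigmoid(inverse_sigmoid(ref_boxes) + self.delta(q))
        return q, ref_boxes

layers = nn.ModuleList(BoxRefineLayer() for _ in range(3))
queries, memory = torch.randn(1, 50, 256), torch.randn(1, 400, 256)
boxes = torch.rand(1, 50, 4)                               # initial reference boxes
for layer in layers:
    queries, boxes = layer(queries, memory, boxes)         # boxes sharpen layer by layer
```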
Citations: 0
Unsupervised contrastive analysis for anomaly detection in brain MRIs via conditional diffusion models
IF 3.3 CAS Tier 3 (Computer Science) Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-11-08 DOI: 10.1016/j.patrec.2025.11.014
Cristiano Patrício , Carlo Alberto Barbano , Attilio Fiandrotti , Riccardo Renzulli , Marco Grangetto , Luís F. Teixeira , João C. Neves
Contrastive Analysis (CA) detects anomalies by contrasting patterns unique to a target group (e.g., unhealthy subjects) from those in a background group (e.g., healthy subjects). In the context of brain MRIs, existing CA approaches rely on supervised contrastive learning or variational autoencoders (VAEs) using both healthy and unhealthy data, but such reliance on target samples is challenging in clinical settings. Unsupervised Anomaly Detection (UAD) learns a reference representation of healthy anatomy, eliminating the need for target samples. Deviations from this reference distribution can indicate potential anomalies. In this context, diffusion models have been increasingly adopted in UAD due to their superior performance in image generation compared to VAEs. Nonetheless, precisely reconstructing the anatomy of the brain remains a challenge. In this work, we bridge CA and UAD by reformulating contrastive analysis principles for the unsupervised setting. We propose an unsupervised framework to improve the reconstruction quality by training a self-supervised contrastive encoder on healthy images to extract meaningful anatomical features. These features are used to condition a diffusion model to reconstruct the healthy appearance of a given image, enabling interpretable anomaly localization via pixel-wise comparison. We validate our approach through a proof-of-concept on a facial image dataset and further demonstrate its effectiveness on four brain MRI datasets, outperforming baseline methods in anomaly localization on the NOVA benchmark.
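The final localization step described above, reconstructing the healthy appearance and comparing pixel-wise, reduces to a short routine once the conditional diffusion model is trained. In this sketch, healthy_reconstruction is a placeholder for that model, and the smoothing and normalization choices are assumptions.

```python
import torch
import torch.nn.functional as F

def healthy_reconstruction(x):
    # Placeholder: the real pipeline runs the diffusion model conditioned
    # on features from the contrastive anatomy encoder.
    return x.clone()

def anomaly_map(x, blur_kernel=5):
    recon = healthy_reconstruction(x)
    err = (x - recon).abs()                                # pixel-wise residual
    # Light smoothing suppresses single-pixel reconstruction noise.
    err = F.avg_pool2d(err, blur_kernel, stride=1, padding=blur_kernel // 2)
    return err / err.amax(dim=(2, 3), keepdim=True).clamp_min(1e-8)

scores = anomaly_map(torch.rand(1, 1, 128, 128))           # normalized map in [0, 1]
```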
Citations: 0