
Pattern Recognition Letters: Latest Publications

Drug response prediction: A critical systematic review of current datasets and methods
IF 3.3 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-01 · Epub Date: 2025-10-30 · DOI: 10.1016/j.patrec.2025.10.016
Nguyen Khoa Tran, Gunnar W. Klau
Predicting drug response is a critical task in personalized medicine. Several recent studies have reported promising improvements in predictive performance with deep learning models trained on molecular characterizations of cell lines and drugs. However, our baseline tests suggest that little to no meaningful biological or chemical information is being learned from multi-omics data in the publicly available large-scale datasets GDSC and DepMap Public or molecular graphs, respectively. In our experiments, even gene expression data, commonly regarded as highly predictive, failed to deliver satisfactory drug response predictions. This raises the possibility that drug response measures or patterns observed in multi-omics data may not arise from underlying biological mechanisms. To investigate this, we identified and examined inconsistencies within and across the GDSC2 and DepMap Public 24Q2 datasets. We found that IC50 and AUC values of replicated experiments in GDSC2 had an average Pearson correlation coefficient of only 0.563±0.230 and 0.468±0.358, respectively. Additionally, somatic mutations shared between cell lines in the two datasets showed a Pearson correlation coefficient of only 0.180. Even in cases where TGSA, the current best-performing method to our knowledge, exceeded baseline performance, it still did not surpass a simple baseline multi-output multilayer perceptron (MMLP). Moreover, MMLP is not only more easily adaptable to new datasets but also significantly faster, making it a viable baseline for comparisons. In conclusion, our findings suggest that current cell-line and drug data are insufficient for existing modeling approaches to effectively uncover the biological and chemical mechanisms underlying drug response. Therefore, improving data quality or focusing on different data types is crucial before proposing novel methods.
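The replicate-consistency check described in this abstract can be illustrated with a short script. The replicate values below are hypothetical stand-ins, not GDSC2 data; only the metric (the sample Pearson correlation coefficient) matches the paper's analysis.

```python
import math

def pearson(x, y):
    """Sample Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical replicate IC50 measurements (log scale) for one drug
# across the same five cell lines; real values come from GDSC2.
rep1 = [2.1, 0.5, 3.3, 1.8, 2.9]
rep2 = [2.4, 0.9, 2.8, 2.0, 3.1]
r = pearson(rep1, rep2)
```

A per-drug average of such coefficients over all replicated drug/cell-line pairs yields the 0.563 ± 0.230 figure the authors report for IC50.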
Citations: 0
ADFNeT: Adaptive decomposition and fusion for color constancy
IF 3.3 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-01 · Epub Date: 2025-10-28 · DOI: 10.1016/j.patrec.2025.10.006
Zhuo-Ming Du , Hong-An Li , Qian Yu
Achieving color constancy is a critical yet challenging task, requiring the estimation of global illumination from a single RGB image to remove color casts caused by non-standard lighting. This paper introduces ADFNet (Adaptive Decomposition and Fusion Network), an end-to-end framework comprising two key modules: ADCL (Adaptive Decomposition and Coefficient Learning) and SWIP (Semantic Weighting for Illumination Prediction). ADCL decomposes the input image into three interpretable components (Mean Intensity, Variation Magnitude, and Variation Direction), while jointly learning adaptive weights and offsets for accurate recomposition. These components are fused into an HDR-like representation via an Adaptive Fusion Module. SWIP further refines this representation through semantic-aware weighting and predicts the global illumination using a lightweight convolutional network. Extensive experiments demonstrate that ADFNet achieves state-of-the-art accuracy and robustness, highlighting its potential for real-world applications such as photographic enhancement and vision-based perception systems.
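The three interpretable components named above can be illustrated on a single RGB value: the mean intensity, the magnitude of the deviation from that mean, and the deviation's direction as a unit vector. This toy sketch omits ADCL's learned adaptive weights and offsets; it only shows that the three components recompose the input exactly.

```python
import math

def decompose(rgb):
    """Split an RGB value into mean intensity, variation magnitude,
    and variation direction (a unit vector). Toy analogue of ADCL's
    three components; learned weights and offsets are omitted."""
    m = sum(rgb) / 3.0
    v = [c - m for c in rgb]
    mag = math.sqrt(sum(c * c for c in v))
    direction = [c / mag for c in v] if mag > 0 else [0.0, 0.0, 0.0]
    return m, mag, direction

def recompose(m, mag, direction):
    """Exact inverse of decompose for mag > 0."""
    return [m + mag * d for d in direction]

pixel = [0.8, 0.5, 0.2]
m, mag, direction = decompose(pixel)
restored = recompose(m, mag, direction)
```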
Citations: 0
DG-DETR: Toward domain generalized detection transformer
IF 3.3 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-01 · Epub Date: 2025-11-10 · DOI: 10.1016/j.patrec.2025.11.023
Seongmin Hwang , Daeyoung Han , Moongu Jeon
End-to-end Transformer-based detectors (DETRs) have demonstrated strong detection performance. However, domain generalization (DG) research has primarily focused on convolutional neural network (CNN)-based detectors, while paying little attention to enhancing the robustness of DETRs. In this letter, we introduce a Domain Generalized DEtection TRansformer (DG-DETR), a simple, effective, and plug-and-play method that improves out-of-distribution (OOD) robustness for DETRs. Specifically, we propose a novel domain-agnostic query selection strategy that removes domain-induced biases from object queries via orthogonal projection onto the instance-specific style space. Additionally, we leverage a wavelet decomposition to disentangle features into domain-invariant and domain-specific components, enabling synthesis of diverse latent styles while preserving the semantic features of objects. Experimental results validate the effectiveness of DG-DETR. Our code is available at https://github.com/smin-hwang/DG-DETR.
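The orthogonal-projection idea can be sketched for the special case of a single style vector: the component of a query along the style direction is subtracted, leaving a query orthogonal to it. DG-DETR projects onto an instance-specific style *space* (several directions); this one-vector version conveys the geometry.

```python
def remove_style(query, style):
    """Project `query` onto the orthogonal complement of one style
    vector: q' = q - (<q, s> / <s, s>) s. One-vector special case of
    DG-DETR's projection onto the instance-specific style space."""
    dot_qs = sum(a * b for a, b in zip(query, style))
    dot_ss = sum(a * a for a in style)
    coef = dot_qs / dot_ss
    return [a - coef * b for a, b in zip(query, style)]

q = [1.0, 2.0, 3.0]       # toy object query
s = [0.0, 1.0, 0.0]       # toy style direction
q_clean = remove_style(q, s)
```

After the projection the query carries no component along the style direction, which is the sense in which domain-induced bias is "removed".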
Citations: 0
Boosting neural network performance for high dimensional data through random projections
IF 3.3 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-01 · Epub Date: 2025-11-07 · DOI: 10.1016/j.patrec.2025.11.006
Panagiotis Anangnostou , Sotiris K. Tasoulis , Aristidis G. Vrahatis , Spiros V. Georgakopoulos , Vassilis P. Plagianakos
Advancements in molecular Biology have driven a paradigm shift in disease understanding, particularly with the rise of precision Medicine enabled by single-cell studies. These datasets are typically high-dimensional, posing several computational challenges. They are often sensitive to noise, and their sparse structure can lead to issues with overfitting. While Deep Neural Networks can address many of these limitations, they still struggle with high-dimensional tabular data. In response to these challenges, we propose a novel framework that leverages the concept of data augmentation for high-dimensional tabular data. By augmenting the samples and simultaneously reducing their dimensionality, we create a balanced environment that improves Deep Neural Network performance. This augmentation is achieved through the Random Projection method, combined with a stochastic filtering process for the randomly projected spaces. We validate our approach on several scRNA-seq datasets, showing that it not only enhances Deep Neural Network performance, but also outperforms state-of-the-art scRNA-seq classifiers. The proposed methodology offers new opportunities for addressing “small n, large p” problems across diverse domains.
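The core mechanism, Gaussian random projection, is easy to sketch: each low-dimensional coordinate is a random linear combination of the original features, scaled by 1/sqrt(k). Projecting the same high-dimensional sample with different seeds yields several low-dimensional "views", which is the augmentation idea described above. The dimensions and data below are illustrative, and the stochastic filtering step is omitted.

```python
import random

def random_projection(x, k, seed):
    """Project a p-dimensional sample to k dimensions with a Gaussian
    random matrix scaled by 1/sqrt(k). Different seeds give different
    projected views of the same sample."""
    rng = random.Random(seed)
    scale = 1.0 / (k ** 0.5)
    # One row of the random matrix per output dimension.
    return [scale * sum(rng.gauss(0.0, 1.0) * xi for xi in x) for _ in range(k)]

sample = [float(i % 7) for i in range(1000)]  # stand-in for a gene-expression profile
views = [random_projection(sample, k=16, seed=s) for s in range(3)]
```

Each view is both an augmented copy and a dimensionality-reduced one, which is how the method simultaneously enlarges the sample set and shrinks p.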
Citations: 0
Cross-scale coupled attention network for underwater image enhancement
IF 3.3 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-01 · Epub Date: 2025-11-05 · DOI: 10.1016/j.patrec.2025.11.001
Gaoli Zhao , Yuheng Wu , Kefei Zhang , Song Han , Wenyi Zhao , Weidong Zhang
Underwater images often suffer from significant degradation due to light attenuation and scattering in water. To address this challenge, an underwater image enhancement network based on a cross-scale coupled attention mechanism is proposed in this letter. The network contains a spatial feature stratification module comprising four feature extraction branches with distinct receptive fields and a global normalization branch for comprehensively capturing multiscale hierarchical textures. Furthermore, a coupled attention module is designed to adaptively model saliency relationships across different scales and spatial positions, guiding the multibranch feature process fusion and enhancing the collaborative representations of global and local information. To achieve enhanced perceptual consistency and detail fidelity during fusion, a perceptual loss based on the VGG-19 network is introduced to supervise the effective reconstruction of multiscale features. We evaluated the method across several public underwater image datasets, and the results consistently highlighted its strength from both subjective and objective perspectives.
Citations: 0
Lightweight adaptive spatiotemporal information fusion network for medical time series classification
IF 3.3 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-01 · Epub Date: 2025-11-10 · DOI: 10.1016/j.patrec.2025.11.021
Fan Yang , Anping Zeng , Chunlin He , Chaorong Li , Xingjie Wang , Shijie Xu
Medical time series (MedTS) data, such as electroencephalography (EEG) and electrocardiography (ECG), play a crucial role in monitoring physiological signals and diagnosing neurological and cardiovascular conditions. While deep learning methods have achieved notable success in general time series classification, they often struggle to effectively capture the unique spatiotemporal dependencies inherent in clinical MedTS data. Additionally, their high computational complexity and lack of interpretability hinder practical deployment in healthcare settings. To address these challenges, we propose ASTIFNet, a lightweight Adaptive SpatioTemporal Information Fusion Network. The framework first employs a cross-channel fusion mechanism and multi-granularity feature extraction to hierarchically model spatiotemporal patterns. Next, a variance-based attention module is incorporated to dynamically focus on clinically relevant features while minimizing computational overhead. Finally, the model preserves the original time-series structure through feature-map-based processing, enabling transparent decision-making with post-hoc interpretability. Experiments on four public benchmarks demonstrate that ASTIFNet matches state-of-the-art performance while requiring fewer than 10KB of parameters.
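The variance-based attention idea can be sketched without any learned parameters: each channel of a multichannel signal is weighted by the softmax of its variance, so flat (uninformative) channels are downweighted. This is a minimal stand-in for ASTIFNet's module, not its actual implementation, and the signals below are hypothetical.

```python
import math

def variance_attention(channels):
    """Weight each channel of a multichannel time series by the softmax
    of its variance. Toy stand-in for a variance-based attention module;
    learned parameters are omitted."""
    def var(x):
        m = sum(x) / len(x)
        return sum((v - m) ** 2 for v in x) / len(x)
    variances = [var(c) for c in channels]
    top = max(variances)
    exps = [math.exp(v - top) for v in variances]  # shift for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

# Two hypothetical ECG-like channels: one nearly flat, one active.
weights = variance_attention([[0.0, 0.0, 0.1, 0.0], [1.0, -1.0, 1.2, -0.8]])
```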
Citations: 0
Comprehending C codes with LLMs: Effective comment generation through retrieval and reasoning
IF 3.3 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-01 · Epub Date: 2025-10-09 · DOI: 10.1016/j.patrec.2025.10.007
Srijoni Majumdar , Adwita Deshpande , Partha Pratim Das , Partha Pratim Chakrabarti
Software maintenance requires substantial time for program comprehension. Code comments significantly improve understandability by providing a glass-box view of the code and are thus essential for maintainability. Prior work has analyzed comment attributes, built automated systems to detect irrelevant comments, and applied machine learning to generate meaningful comments. With the rise of large language models, comment generation has accelerated, particularly for Java and Python. In this paper, we present a first-of-its-kind framework for code comment generation in C, a language widely used in low-level tasks. We explore the effectiveness of few-shot learning, retrieval-augmented generation, and code structure based context modeling. Our work builds on prior field studies conducted across seven companies in India and the UK, resulting in a dataset of 20,206 human-annotated C comments rated for usefulness. By 2024, contributions from 40 academic teams and 50 hackathon groups expanded this dataset to 24,578 comments. We further introduce a reusable evaluation framework involving human experts and large language model evaluators, grounded in eight dimensions derived from four industry case studies. A subset of 11,797 comments has been annotated for the presence or absence of these dimensions, serving as both input for generation and evaluation. Our results show that GPT-4o mini-trained models produce comments most aligned with human-annotated ones, achieving a similarity score of 0.64, followed by Gemini 1.5 at 0.58. GPT-4.5 achieves the highest alignment with humans as an evaluator, while Llama-3.1-70b performs the lowest.
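The retrieval step of retrieval-augmented comment generation can be sketched as a nearest-neighbor lookup over annotated code: the stored snippet most similar to the query is retrieved, and its comment would be placed in the LLM prompt as a few-shot example. The tokenizer, similarity measure (Jaccard over identifier tokens), and mini-corpus below are toy assumptions, not the paper's pipeline.

```python
import re

def tokenize(code):
    """Crude identifier-level tokenizer for C snippets."""
    return set(re.findall(r"[A-Za-z_]\w*", code))

def retrieve(query_code, corpus):
    """Return the (code, comment) pair whose code shares the most
    identifier tokens with the query (Jaccard similarity). Toy stand-in
    for the retrieval stage of retrieval-augmented comment generation."""
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0
    q = tokenize(query_code)
    return max(corpus, key=lambda pair: jaccard(q, tokenize(pair[0])))

# Hypothetical mini-corpus of (C snippet, human comment) pairs.
corpus = [
    ("int add(int a, int b) { return a + b; }", "Adds two integers."),
    ("FILE *f = fopen(path, \"r\");", "Opens a file for reading."),
]
best = retrieve("int sum(int a, int b) { return a + b; }", corpus)
```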
Citations: 0
Editorial: Special Section Forum for Information Retrieval Evaluation (FIRE) 2024
IF 3.3 · CAS Tier 3 (Computer Science) · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-01 · Epub Date: 2025-10-27 · DOI: 10.1016/j.patrec.2025.10.012
Thomas Mandl , Prasenjit Majumder
The Forum for Information Retrieval Evaluation (FIRE) is an evaluation initiative focused on resources for languages of India. FIRE 2024 was the 16th edition and comprised 10 evaluation tracks that ran as shared tasks. Three contributions showcase research from these evaluation tracks of the FIRE conference. The first contribution concerns spoken language retrieval for six languages of India. The second paper deals with source code retrieval for the language C and employs LLMs for the task of generating good comments. The third contribution discusses performance patterns of classifiers for hate speech collections for languages of India. Furthermore, connections to the Pattern Recognition community are discussed.
引用次数: 0
Assessing demographic bias in brain age prediction models using multiple deep learning paradigms
IF 3.3, CAS Tier 3 (Computer Science), JCR Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2026-01-01. Epub Date: 2025-11-25. DOI: 10.1016/j.patrec.2025.11.029
Michela Gravina , Giuseppe Pontillo , Zeena Shawa , James H. Cole , Carlo Sansone
Predicting brain age from structural Magnetic Resonance Imaging (MRI) has emerged as a critical task at the intersection of medical imaging and Artificial Intelligence, with deep learning (DL) models achieving state-of-the-art performance. However, despite their predictive power, such models remain susceptible to algorithmic bias, especially when applied to populations whose demographic characteristics differ from those seen during training. In this paper, we investigate how demographic factors influence the performance of brain age prediction models. We leverage a large, demographically diverse MRI dataset including 7480 healthy subjects (3599 female and 3881 male) spanning three major racial groups: White, Black, and Asian. To explore the effects of data composition and model architecture on generalization, we design and compare multiple training paradigms, including models trained on a single group and a Multi-Input architecture that explicitly incorporates demographic metadata. Results on an external test set including 3194 subjects (2162 White, 694 Black, and 338 Asian) reveal evidence of demographic bias, with the Multi-Input model achieving the most balanced performance across groups (mean absolute error: 2.94 ± 0.07 for White, 2.91 ± 0.16 for Black, and 3.34 ± 0.17 for Asian subjects). These findings highlight the need for fairness-aware approaches, advocating for strategies that mitigate bias, and enhance generalizability.
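The per-group mean absolute errors the abstract reports can be computed with a simple stratified metric. The sketch below is illustrative only — the function name and the toy arrays are not from the paper:

```python
import numpy as np

def group_mae(y_true, y_pred, groups):
    """Mean absolute error of age predictions, stratified by demographic group."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    groups = np.asarray(groups)
    return {g: float(np.mean(np.abs(y_true[groups == g] - y_pred[groups == g])))
            for g in np.unique(groups)}

# Toy illustration (not the paper's data): two subjects per group.
ages  = np.array([60.0, 70.0, 55.0, 65.0])
preds = np.array([62.0, 68.0, 58.0, 64.0])
grp   = np.array(["White", "White", "Asian", "Asian"])
print(group_mae(ages, preds, grp))  # {'Asian': 2.0, 'White': 2.0}
```

Comparing these per-group values, rather than a single pooled MAE, is what exposes the kind of imbalance the study describes.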
Michela Gravina, Giuseppe Pontillo, Zeena Shawa, James H. Cole, Carlo Sansone. "Assessing demographic bias in brain age prediction models using multiple deep learning paradigms." Pattern Recognition Letters, vol. 199, pp. 246–253. DOI: 10.1016/j.patrec.2025.11.029
Citations: 0
Deep learning and multi-modal MRI for the segmentation of sub-acute and chronic stroke lesions
IF 3.3, CAS Tier 3 (Computer Science), JCR Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE. Pub Date: 2026-01-01. Epub Date: 2025-11-14. DOI: 10.1016/j.patrec.2025.11.017
Alessandro Di Matteo , Youwan Mahé , Stéphanie Leplaideur , Isabelle Bonan , Elise Bannier , Francesca Galassi
Stroke is a leading cause of morbidity and mortality worldwide. Accurate segmentation of post-stroke lesions on MRI is crucial for assessing brain damage and informing rehabilitation. Manual segmentation, however, is time-consuming and prone to error, motivating the development of automated approaches. This study investigates how deep learning with multimodal MRI can improve automated lesion segmentation in sub-acute and chronic stroke. A single-modality baseline was trained on the public ATLAS v2.0 dataset (655 T1-w scans) using the nnU-Net v2 framework and evaluated on an independent clinical cohort (45 patients with paired T1-w and FLAIR MRI). On this internal dataset, we conducted a systematic ablation comparing (i) direct transfer of the ATLAS baseline, (ii) fine-tuning using T1-w only, and (iii) fusion of T1-w and FLAIR inputs through early, mid, and late fusion strategies, each tested with metric averaging and ensembling.
The ATLAS baseline model achieved a mean Dice score of 0.64 and a lesion-wise F1 score of 0.67. On the clinical dataset, ensembling improved performance (Dice 0.70 vs. 0.68; F1 0.79 vs. 0.73), while fine-tuning on T1-w data further increased accuracy (Dice 0.72; F1 0.78). The best overall results were obtained with a T1+FLAIR late-fusion ensemble (Dice 0.75; F1 0.80; Average Surface Distance (ASD) 2.94 mm), with statistically significant improvements, especially for small and medium lesions.
These results show that fine-tuning and multimodal fusion — particularly late fusion — improve generalization for post-stroke lesion segmentation, supporting robust, reproducible quantification in clinical settings.
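The Dice score used to rank the segmentation models above is the overlap measure 2|A∩B| / (|A|+|B|). A minimal sketch for binary masks follows; the function name and the toy masks are illustrative, not from the paper:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient between two binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty
    return (2.0 * intersection) / (pred.sum() + target.sum() + eps)

# Toy 2x3 masks: intersection = 2 voxels, |pred| = |target| = 3.
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice(a, b), 3))  # → 0.667
```

A mean Dice of 0.75, as reported for the late-fusion ensemble, thus corresponds to three-quarters overlap (by this harmonic measure) between predicted and reference lesion masks.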
Alessandro Di Matteo, Youwan Mahé, Stéphanie Leplaideur, Isabelle Bonan, Elise Bannier, Francesca Galassi. "Deep learning and multi-modal MRI for the segmentation of sub-acute and chronic stroke lesions." Pattern Recognition Letters, vol. 199, pp. 225–231. DOI: 10.1016/j.patrec.2025.11.017
Citations: 0