
Pattern Recognition Letters: Latest Publications

Hierarchical memory-enhanced networks for student knowledge tracing
IF 3.3 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-03-01 | Epub Date: 2026-01-03 | DOI: 10.1016/j.patrec.2026.01.002
Huali Yang , Junjie Hu , Tao Huang , Shengze Hu , Wang Gao , Zhuoran Xu , Jing Geng
Accurate recognition of students’ knowledge states is critical for personalized education in the field of intelligent education. Knowledge tracing (KT) has emerged as an important research domain for tracing students’ knowledge states through the analysis of learning-trajectory data. However, existing KT methods tend to overlook the hierarchical nature of memory, resulting in incomplete memory transfer. To address this issue, this study proposes a novel hierarchical memory-enhanced knowledge tracing (HMEKT) method that models the hierarchical structure of memory. HMEKT consists of three modules: shallow memory, deep memory, and performance prediction. Specifically, in the shallow memory module, learning and forgetting mechanisms are used to simulate memory growth and decay, capturing the dynamic changes in knowledge states. In the deep memory module, a dynamic memory matrix stores the student’s core knowledge system, transferring shallow memory into deep memory through enhancement and reduction gates that control memory transfer. Finally, to predict student performance, relevant knowledge states for upcoming questions are aggregated from the knowledge-system matrix. Experiments on four datasets demonstrate the effectiveness of the model, with a 1.99% AUC gain on Assistment2017 compared to state-of-the-art methods.
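The gated shallow-to-deep transfer described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual equations: the function name `update_memories`, the exponential forgetting term, and the gate parameters `W_e`/`W_r` are all assumptions introduced for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def update_memories(shallow, deep, gain, decay, W_e, W_r):
    # learning/forgetting dynamics: exponential decay plus new learning gain
    shallow = shallow * np.exp(-decay) + gain
    # enhancement and reduction gates controlling shallow-to-deep transfer
    e_gate = sigmoid(shallow @ W_e)
    r_gate = sigmoid(shallow @ W_r)
    # gated transfer: the reduction gate erases part of deep memory,
    # the enhancement gate writes in the current shallow memory
    deep = deep * (1.0 - r_gate) + e_gate * shallow
    return shallow, deep
```

The two gates play the same roles as the forget/input gates of an LSTM cell, but operate on a per-concept memory vector rather than a hidden state.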
Pattern Recognition Letters, Volume 201, Pages 37-44.
Citations: 0
Entropy calibrated prototype embedding for transductive few-shot learning
IF 3.3 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-03-01 | Epub Date: 2026-01-13 | DOI: 10.1016/j.patrec.2026.01.015
Mengfei Guo , Jiahui Wang , Qin Xu , Bo Jiang , Bin Luo
Transductive few-shot learning aims to generalize to new classes from limited labeled support samples together with all unlabeled query samples. Widely adopted paradigms include prototypical networks and graph-based label propagation: the former classifies queries by their distances to class prototypes, while the latter propagates labels from the support samples. However, existing methods typically treat all samples as equally important, neglect their inherent reliability, and underutilize prototypes, treating them merely as static anchors. This paper proposes Entropy Calibrated Prototype Embedding (ECPE), a novel framework that not only integrates prototypical networks and label propagation but also addresses their respective limitations through an iterative refinement strategy. First, we propose Entropy Calibration (EC), which dynamically assesses sample reliability using prediction entropy to weigh each sample’s influence in label propagation. Second, the proposed Entropy-aware Prototype Embedding (EPE) treats prototypes as evolving synthetic nodes, iteratively updating them from calibrated predictions and embedding high-certainty prototypes into the graph. By iterating label calibration, entropy-aware prototype embedding, and label propagation, ECPE improves classification accuracy and robustness. Extensive experiments demonstrate that ECPE surpasses state-of-the-art performance on three standard transductive FSL benchmarks. Our source code is published at: https://github.com/gmf-ahu/ECPE.
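The entropy-based reliability weighting can be sketched as below; this is a hedged illustration of the idea, not the paper's exact formulation (the `exp(-H)` calibration and the row-normalized propagation step are assumptions).

```python
import numpy as np

def entropy_weights(probs, eps=1e-12):
    # prediction entropy per sample; low entropy -> high reliability weight
    H = -np.sum(probs * np.log(probs + eps), axis=1)
    return np.exp(-H)

def weighted_propagation(A, Y, w, alpha=0.5):
    # one propagation step in which each neighbor's vote is scaled
    # by its reliability weight before row-normalization
    W = A * w[None, :]
    W = W / (W.sum(axis=1, keepdims=True) + 1e-12)
    return alpha * (W @ Y) + (1.0 - alpha) * Y
```

A confident one-hot prediction keeps full weight, while a uniform prediction over two classes is down-weighted to 0.5, so unreliable queries contribute less to their neighbors' labels.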
Pattern Recognition Letters, Volume 201, Pages 138-144.
Citations: 0
AMDC: Attenuation map-guided dual-color space for underwater image color correction
IF 3.3 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-03-01 | Epub Date: 2026-01-13 | DOI: 10.1016/j.patrec.2026.01.005
Shilong Sun , Baiqiang Yu , Ling Zhou , Junpeng Xu , Wenyi Zhao , Weidong Zhang
Underwater images frequently exhibit color distortions due to wavelength-dependent light attenuation and absorption, further complicated by irregular underwater lighting conditions. Traditional color correction methods primarily target global light attenuation but are less effective at handling local color shifts caused by discontinuous depth variations and artificial illumination. To address this issue, we propose a dual-space adaptive color correction method guided by an attenuation map, referred to as AMDC. Specifically, we first apply global attenuation compensation based on the image’s maximum reference channel. Building on the globally compensated result, we then introduce a dual-space collaborative correction strategy. In RGB space, we perform local adaptive compensation using a weighted sliding window. In CIELab space, we restore color saturation through a zero-symmetric adaptive offset correction approach. To retain the most visually optimal color features, we selectively fuse the a and b channels from the two correction results, producing a locally corrected image. Finally, we use the maximum attenuation map of the raw image to guide the fusion of the locally corrected image with the raw image, generating the final color-corrected output. Extensive qualitative and quantitative experiments demonstrate the effectiveness and robustness of our method for underwater image color correction.
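The global compensation step can be sketched as a gray-world-style shift toward the strongest channel. This is a minimal sketch of the idea only; the exact compensation formula in the paper is not given in the abstract, so the mean-gap modulation below is an assumption.

```python
import numpy as np

def global_attenuation_compensation(img):
    # img: float HxWx3 array in [0, 1]; compensate weaker channels toward
    # the channel with the largest mean (the "maximum reference channel")
    means = img.reshape(-1, 3).mean(axis=0)
    ref = int(np.argmax(means))
    out = img.astype(float).copy()
    for c in range(3):
        if c != ref:
            # shift each weaker channel by its mean gap to the reference,
            # modulated pixel-wise by the reference channel itself
            out[..., c] += (means[ref] - means[c]) * img[..., ref]
    return np.clip(out, 0.0, 1.0)
```

Underwater, red is usually the most attenuated channel, so this step typically raises the red mean toward the green reference before any local correction is applied.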
Pattern Recognition Letters, Volume 201, Pages 80-86.
Citations: 0
Compressing model with few class-imbalance samples: An out-of-distribution expedition
IF 3.3 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-03-01 | Epub Date: 2026-01-13 | DOI: 10.1016/j.patrec.2026.01.010
Tian-Shuang Wu , Shen-Huan Lyu , Yanyan Wang , Ning Chen , Zhihao Qu , Baoliu Ye
Few-sample model compression aims to compress a large pre-trained model into a compact one using only a few samples. However, previous methods typically assume a balanced class distribution, which is costly under severe data scarcity. In the presence of imbalance, the compressed model exhibits significant performance degradation. We propose a novel framework named OOD-Enhanced Few-Sample Model Compression (OE-FSMC), introducing out-of-distribution (OOD) samples with dynamically assigned labels to prevent bias during the compression process. To avoid overfitting the OOD samples, we incorporate a joint distillation loss and a class-dependent regularization term. Extensive experiments on multiple benchmark datasets show that our framework can be seamlessly incorporated into existing few-sample model compression methods, effectively mitigating the accuracy degradation caused by class imbalance.
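A distillation loss with a class-dependent weighting can be sketched as follows. This is an illustrative stand-in, not the paper's actual regularizer: the inverse-frequency weighting, the temperature value, and the function name `class_weighted_kd` are assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def class_weighted_kd(student_logits, teacher_logits, labels, class_counts, T=2.0):
    # soft-label distillation with a per-sample, class-dependent weight:
    # samples from rarer classes receive larger weights (inverse frequency)
    p_t = softmax(teacher_logits / T)
    log_p_s = np.log(softmax(student_logits / T) + 1e-12)
    per_sample = -np.sum(p_t * log_p_s, axis=1) * T * T
    w = 1.0 / np.asarray(class_counts, dtype=float)[labels]
    w = w / w.mean()
    return float(np.mean(w * per_sample))
```

In the OE-FSMC setting, the same distillation term would also be evaluated on OOD samples with dynamically assigned labels, so the student is not biased toward the over-represented classes of the tiny training set.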
Pattern Recognition Letters, Volume 201, Pages 117-124.
Citations: 0
MER-CAPF: Audio-text emotion recognition through cross-attention mechanism and multi-granularity pooling strategy
IF 3.3 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-03-01 | Epub Date: 2026-01-13 | DOI: 10.1016/j.patrec.2026.01.008
Chengming Chen, Pengyuan Liu, Zhicheng Dong, Zhuo He, Zhijian Li
In the field of Human-Computer Interaction (HCI), emotion recognition is regarded as a critical yet challenging task due to its multimodal nature and limitations in data acquisition. To achieve accurate recognition of multimodal emotional information such as speech and text, this paper proposes a novel multimodal emotion recognition framework, MER-CAPF (Multimodal Emotion Recognition with Cross-Attention and Pooling Fusion). The framework employs a hierarchically frozen BERT model, and a depthwise separable convolutional neural network (DSCNN) combined with a Bi-LSTM, to extract features from the text and audio modalities, respectively. During the feature fusion stage, a multi-head cross-attention mechanism and a multi-granularity pooling strategy are introduced to fully capture semantic and acoustic associations across modalities. In addition, the model incorporates parallel modality encoders with a progressive modality alignment mechanism to achieve synergistic alignment and deep interaction between speech and text features. Experiments on three public benchmark datasets (IEMOCAP, MELD, and CMU-MOSEI) demonstrate that MER-CAPF achieves accuracies of 74.73%, 63.26%, and 67.38%, respectively, outperforming most existing methods and reaching a level comparable to recent state-of-the-art models, thereby validating the effectiveness and robustness of the proposed framework.
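The cross-attention fusion and pooling can be sketched in a single-head form; this is a minimal sketch with learned projections omitted, and the two-granularity (mean/max) pooling is an illustrative choice, not necessarily the paper's exact strategy.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(text_feats, audio_feats):
    # text tokens act as queries attending over audio keys/values,
    # so each text token gathers its most relevant acoustic evidence
    d = text_feats.shape[-1]
    scores = text_feats @ audio_feats.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ audio_feats

def multi_granularity_pool(feats):
    # combine a coarse (mean) and a salient (max) temporal summary
    return np.concatenate([feats.mean(axis=0), feats.max(axis=0)])
```

A symmetric call with audio as queries over text keys/values would give the second direction of the cross-modal interaction.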
Pattern Recognition Letters, Volume 201, Pages 125-131.
Citations: 0
Wavelet-based diffusion transformer for image dehazing
IF 3.3 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-03-01 | Epub Date: 2026-01-10 | DOI: 10.1016/j.patrec.2026.01.016
Cheng Ma , Guojun Liu , Jing Yue
Among current diffusion-model-based image dehazing methods, few studies explore and leverage the inherent prior knowledge of hazy images. Additionally, the inherent complexity of these models often makes training difficult, which in turn leads to poor restoration performance in dense hazy environments. To address these challenges, this paper proposes a dehazing diffusion model based on Haar wavelet priors, aiming to fully exploit the observation that haze information is concentrated in the low-frequency region. Specifically, the Haar wavelet transform is first applied to decompose the hazy image, and the diffusion model is used to generate the image’s low-frequency information, thereby reconstructing the main colors and content of the dehazed image. Moreover, a Gabor-based high-frequency enhancement module is designed to extract high-frequency details through multi-directional Gabor convolution filters, further improving the fine-grained restoration capability. Subsequently, a multi-scale pooling block is adopted to reduce blocky artifacts caused by non-uniform haze conditions, enhancing the visual consistency of the image. Finally, the effectiveness of the proposed method is demonstrated on publicly available datasets, and the model’s generalization ability is evaluated on real hazy-image datasets, along with its potential for application in other downstream tasks. The code is available at https://github.com/Mccc1003/WDiT_Dehaze-main.
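The single-level Haar decomposition that separates the haze-dominated low-frequency band from the detail bands can be written directly, without a wavelet library (averaging/differencing pairs of rows, then pairs of columns, for even-sized images):

```python
import numpy as np

def haar2d(img):
    # one level of the 2D Haar transform; LL is the low-frequency
    # approximation where, per the paper, haze information concentrates
    a = (img[0::2, :] + img[1::2, :]) / 2.0  # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0  # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0     # approximation band
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0     # horizontal details
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0     # vertical details
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0     # diagonal details
    return LL, LH, HL, HH
```

In the proposed pipeline, the diffusion model would regenerate the LL band while the Gabor-based module refines the high-frequency bands; the split above only illustrates the decomposition itself.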
Pattern Recognition Letters, Volume 201, Pages 58-65.
Citations: 0
MR-DETR: Miss reduction DETR with context frequency attention and adaptive query allocation strategy for small object detection
IF 3.3 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-03-01 | Epub Date: 2026-01-09 | DOI: 10.1016/j.patrec.2026.01.004
Hailan Shen, Zihan Wang, Shuo Huang, Zailiang Chen
Small object detection is a critical task in computer vision, aiming to accurately detect tiny instances within images. Although DETR-based methods have improved general object detection, they often miss small objects because of the objects’ limited size and indistinct features. Moreover, DETR-based methods employ a fixed number of queries, making it difficult to adapt to dynamic variations across scenes. In this study, we propose Miss Reduction DETR (MR-DETR), which leverages Context Frequency Attention (CFA) and an Adaptive Query Allocation Strategy (AQAS) to reduce missed detections. First, to better capture fine details of small objects, CFA is designed with two complementary branches, context and frequency. The former employs axial strip convolutions to capture global contextual information, while the latter uses a frequency modulation module to emphasize local high-frequency details. Next, AQAS is introduced, which applies feature excitation and compression to the encoder’s output maps, dynamically evaluates object density, and automatically adjusts the number of queries based on a density-to-query mapping, thereby improving adaptability in complex scenes and reducing missed detections. Experimental results demonstrate that MR-DETR achieves state-of-the-art detection performance on the aerial image datasets VisDrone and AI-TOD, which mainly contain small objects.
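A density-to-query mapping of the kind AQAS performs can be sketched as below. Both the above-mean-activation density proxy and the linear mapping with bounds `q_min`/`q_max` are assumptions for illustration; the paper's actual excitation/compression mechanism is learned.

```python
import numpy as np

def allocate_queries(activation_map, q_min=100, q_max=900):
    # crude density proxy: fraction of locations with above-mean activation
    density = float(np.mean(activation_map > activation_map.mean()))
    # linear density-to-query mapping within a fixed query budget
    return int(round(q_min + density * (q_max - q_min)))
```

A sparse scene thus gets the minimum query budget, while a crowded aerial scene (many small objects, high density) is allocated more queries, reducing missed detections without paying the maximum cost everywhere.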
Pattern Recognition Letters, Volume 201, Pages 52-57.
Citations: 0
Audio prompt driven reprogramming for diagnosing major depressive disorder
IF 3.3 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2026-03-01 | Epub Date: 2025-12-24 | DOI: 10.1016/j.patrec.2025.12.007
Hyunseo Kim, Longbin Jin, Eun Yi Kim
Diagnosing depression is critical due to its profound impact on individuals and associated risks. Although deep learning techniques like convolutional neural networks and transformers have been employed to detect depression, they require large, labeled datasets and substantial computational resources, posing challenges in data-scarce environments. We introduce p-DREAM (Prompt-Driven Reprogramming Exploiting Audio Mapping), a novel and data-efficient model designed to diagnose depression from speech data alone. The p-DREAM combines two main strategies: data augmentation and model reprogramming. First, it utilizes audio-specific data augmentation techniques to generate a richer set of training examples. Next, it employs audio prompts to aid in domain adaptation. These prompts guide a frozen pre-trained transformer, which extracts meaningful features. Finally, these features are fed into a lightweight classifier for prediction. The p-DREAM outperforms traditional fine-tuning and linear probing methods, while requiring only a small number of trainable parameters. Evaluations on three benchmark datasets (DAIC-WoZ, E-DAIC, and AVEC 2014) demonstrate consistent improvements. In particular, p-DREAM achieves a leading macro F1 score of 0.7734 using only acoustic features. We further conducted ablation studies on prompt length, position, and initialization, confirming their importance in effective model adaptation. p-DREAM offers a practical and privacy-conscious approach for speech-based depression assessment in low-resource environments. To promote reproducibility and community adoption, we plan to release our codebase in compliance with the ethical protocols outlined in the AVEC challenges.
Pattern Recognition Letters, vol. 201 (2026), pp. 1–8.
Citations: 0
Fine-tuning ImageNet-pretrained models in medical image classification: Reassessing the impact of different factors
IF 3.3, CAS Tier 3 (Computer Science), JCR Q2 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE). Pub Date: 2026-03-01; Epub Date: 2026-01-16. DOI: 10.1016/j.patrec.2026.01.017
Juan Miguel Valverde , Vandad Imani , Jussi Tohka
Fine-tuning ImageNet-pretrained convolutional neural networks is a widely used strategy in medical image classification. Previous studies investigating the benefits of ImageNet pretraining over training from scratch have resulted in conflicting findings, likely due to lack of standardization in the experiments. Here, we identify various factors that were previously overlooked, and we propose a set of standardized experiments that account for these factors and that contribute to clarifying whether pretraining on ImageNet is truly advantageous. Our experiments revealed that dataset-independent factors (training set size, training time, and model size) cannot predict whether ImageNet pretraining will be beneficial. This is because the benefits of ImageNet pretraining depend on other, dataset and implementation specific, factors such as task difficulty and model architecture. We conclude that past demonstrations of the effectiveness of ImageNet pretraining are not universal, and that the potential advantages of ImageNet pretraining should be empirically evaluated in each scenario separately.
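As a loose analogy for the comparison the paper standardizes (not its actual protocol), the initialization question can be isolated by training the same model twice under an identical budget: once warm-started from a related "source" task and once from scratch. Everything here, the tasks, dimensions, and budgets, is synthetic and invented for illustration; which run ends with the lower loss depends on how related the tasks are, mirroring the paper's point that the outcome is dataset-specific.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins (illustrative only): a "source" task used for
# pretraining and a related "target" task used for fine-tuning.
def make_task(w_true, n=200, d=20, noise=0.5, seed=0):
    r = np.random.default_rng(seed)
    X = r.standard_normal((n, d))
    y = (X @ w_true + noise * r.standard_normal(n) > 0).astype(float)
    return X, y

def train(X, y, w0, epochs=50, lr=0.1):
    """Logistic regression via plain gradient descent, fixed budget."""
    w = w0.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    return w, loss

d = 20
w_src = rng.standard_normal(d)
w_tgt = w_src + 0.3 * rng.standard_normal(d)   # related target task

Xs, ys = make_task(w_src, seed=2)
Xt, yt = make_task(w_tgt, seed=3)

w_pre, _ = train(Xs, ys, np.zeros(d))          # "pretraining"
_, loss_warm = train(Xt, yt, w_pre)            # warm start
_, loss_cold = train(Xt, yt, np.zeros(d))      # from scratch

print(f"fine-tune loss, warm start  : {loss_warm:.4f}")
print(f"fine-tune loss, from scratch: {loss_cold:.4f}")
```

Holding the data, budget, and optimizer fixed while varying only the initialization is the kind of controlled comparison the standardized experiments call for.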
Pattern Recognition Letters, vol. 201 (2026), pp. 132–137.
Citations: 0
Frequency-selective CountNet: Enhancing text-guided object counting with frequency features
IF 3.3, CAS Tier 3 (Computer Science), JCR Q2 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE). Pub Date: 2026-03-01; Epub Date: 2025-12-27. DOI: 10.1016/j.patrec.2025.12.014
Cheng Qian , Jiwu Cao , Ying Mao , Ruotian Zhang , Fei Long , Jun Sang
Text-guided object counting aims to estimate the number of objects described by natural language within complex visual scenes. However, existing approaches often struggle to align textual intent with diverse visual patterns, especially when target objects vary in scale, appearance, or context.
To address these limitations, we propose Frequency-Selective CountNet (FSCNet), a novel framework that integrates spatial and frequency-domain features for precise text-guided counting. FSCNet introduces a Triple-Stream Attention Fusion Module (TSAFM) that combines textual, global, and local visual features. Additionally, an Adaptive Frequency Selector (AFS) dynamically emphasizes frequency components by separately modulating the magnitude and phase spectra, preserving geometric consistency during decoding.
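The magnitude/phase split behind the AFS can be illustrated with a short NumPy sketch. This is a hand-written toy, not the paper's module: the feature map, the fixed low-pass-style `gain` (standing in for a learned, adaptive one), and the shapes are all assumptions. The property shown is that modulating only the magnitude spectrum while re-using the original phase leaves the geometric layout of the features intact.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-channel feature map (illustrative shape).
feat = rng.standard_normal((32, 32))

# Frequency domain: split into magnitude and phase spectra.
spec = np.fft.fft2(feat)
mag, phase = np.abs(spec), np.angle(spec)

# A "learnable" per-frequency gain (here fixed for illustration):
# boost low frequencies, damp high ones; the phase is left untouched.
fy = np.fft.fftfreq(32)[:, None]
fx = np.fft.fftfreq(32)[None, :]
radius = np.sqrt(fx**2 + fy**2)
gain = 1.0 / (1.0 + 4.0 * radius)      # smooth low-pass-style gain

# Recombine the modulated magnitude with the original phase.
out = np.fft.ifft2(gain * mag * np.exp(1j * phase)).real

print("output shape:", out.shape)
# Keeping the phase preserves where structures sit in the map; only
# their frequency content (sharpness/texture) changes.
```

With `gain` identically 1 the round trip reproduces the input exactly, which mirrors the geometric-consistency property described above.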
Extensive experiments on the FSC-147 and CARPK datasets demonstrate that FSCNet achieves state-of-the-art performance, outperforming previous best methods by 18.34% in MAE and 27.41% in RMSE on FSC-147 (Avg.) and by 5.17%/7.58% on CARPK.
Pattern Recognition Letters, vol. 201 (2026), pp. 15–21.
Citations: 0