
Pattern Recognition Letters: Latest Articles

Leveraging language to generalize natural images to few-shot medical image segmentation
IF 3.3 | CAS Tier 3 (Computer Science) | Q2 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | Pub Date: 2026-03-01 | Epub Date: 2026-01-13 | DOI: 10.1016/j.patrec.2026.01.009
Feifan Song, Yuntian Bo, Shidong Wang, Yang Long, Haofeng Zhang
Cross-domain Few-shot Medical Image Segmentation (CD-FSMIS) typically involves pre-training on a large-scale source-domain dataset (e.g., a natural image dataset) before transferring to a target domain with limited data for pixel-wise segmentation. However, due to the significant domain gap between natural and medical images, existing Few-shot Segmentation (FSS) methods suffer severe performance degradation in cross-domain scenarios. We observe that using only annotated masks as cross-domain cues is insufficient, whereas rich textual information can effectively establish knowledge relationships between visual instances and language descriptions, mitigating domain shift. To address this, we propose a plug-in Cross-domain Text-guided (CD-TG) module that leverages text-domain alignment to construct a new alignment space for domain generalization. The module consists of two components: (1) a Text Generation Unit, which uses the GPT-4 question-answering system to generate standardized category-level textual descriptions, and (2) a Semantic-guided Unit, which aligns visual features with textual embeddings while incorporating existing mask information. We integrate this plug-in module into five mainstream FSS methods and evaluate it on four widely used medical image datasets; the experimental results demonstrate its effectiveness. Code is available at https://github.com/Lilacis/CD_TG.
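As an illustration of the text-guided alignment idea, the following minimal sketch pools masked visual features into a foreground prototype and aligns it with a category-level text embedding via cosine similarity. All names, shapes, and the loss form are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def semantic_guided_alignment(visual_feats, text_emb, mask):
    """Align masked visual features with a category-level text embedding.

    visual_feats: (B, C, H, W) feature map from the FSS backbone.
    text_emb:     (B, C) text embedding for the category description.
    mask:         (B, 1, H, W) support mask in [0, 1].
    Returns a scalar alignment loss (1 - cosine similarity).
    """
    # Masked average pooling: keep only foreground visual information.
    fg = (visual_feats * mask).sum(dim=(2, 3)) / mask.sum(dim=(2, 3)).clamp(min=1e-6)
    # Cosine alignment between the pooled visual prototype and the text embedding.
    sim = F.cosine_similarity(fg, text_emb, dim=1)
    return (1.0 - sim).mean()

# Toy usage with random tensors.
v = torch.randn(2, 512, 32, 32)
t = torch.randn(2, 512)
m = torch.rand(2, 1, 32, 32)
print(semantic_guided_alignment(v, t, m))
```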
{"title":"Leveraging language to generalize natural images to few-shot medical image segmentation","authors":"Feifan Song ,&nbsp;Yuntian Bo ,&nbsp;Shidong Wang ,&nbsp;Yang Long ,&nbsp;Haofeng Zhang","doi":"10.1016/j.patrec.2026.01.009","DOIUrl":"10.1016/j.patrec.2026.01.009","url":null,"abstract":"<div><div>Cross-domain Few-shot Medical Image Segmentation (CD-FSMIS) typically involves pre-training on a large-scale source domain dataset (e.g., natural image dataset) before transferring to a target domain with limited data for pixel-wise segmentation. However, due to the significant domain gap between natural images and medical images, existing Few-shot Segmentation (FSS) methods suffer from severe performance degradation in cross-domain scenarios. We observe that using only annotated masks as cross-domain cues is insufficient, while rich textual information can effectively establish knowledge relationships between visual instances and language descriptions, mitigating domain shift. To address this, we propose a plug-in Cross-domain Text-guided (CD-TG) module that leverages text-domain alignment to construct a new alignment space for domain generalization. This plug-in module consists of two components, including: (1) Text Generation Unit that utilizes the GPT-4 question-answering system to generate standardized category-level textual descriptions, and (2) Semantic-guided Unit that aligns visual features with textual embeddings while incorporating existing mask information. We integrate this plug-in module into five mainstream FSS methods and evaluate it on four widely used medical image datasets, and the experimental results demonstrate its effectiveness. Code is available at <span><span>https://github.com/Lilacis/CD_TG</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"201 ","pages":"Pages 66-72"},"PeriodicalIF":3.3,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145979047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Nighttime flare removal via frequency decoupling
IF 3.3 | CAS Tier 3 (Computer Science) | Q2 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | Pub Date: 2026-03-01 | Epub Date: 2026-01-13 | DOI: 10.1016/j.patrec.2026.01.012
Minglong Xue, Aoxiang Ning, Jinhong He, Shuaibin Fan, Senming Zhong
Existing methods for nighttime flare removal struggle to decouple flare features from normal image texture features, frequently losing local detail after flare removal. Because flare and content differ in their frequency distributions (flare and lighting information are concentrated mainly in the low-frequency range, while structural content is concentrated in the high-frequency range), we propose a frequency decoupling de-flare network (FDDNet). This method effectively decouples flare from content, enabling efficient flare removal. Specifically, the network consists of a Frequency Decoupling Module (FDM) and a Frequency Fusion Module (FFM). The FDM divides the image's frequency features into low-frequency and high-frequency components by applying masks, and dynamically optimizes their weights to decouple flare from content while retaining as much structural information as possible. In addition, building on conventional skip connections, we propose the Frequency Fusion Module, which separately fuses the amplitude and phase of features from the encoding and decoding stages, reducing the impact of flare and brightness anomalies on the reconstructed image while repairing local damage caused by flare removal. Extensive experiments show that our method significantly improves nighttime flare removal performance.
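The low/high-frequency split described above can be illustrated with a fixed FFT mask; note that the paper's FDM learns and dynamically optimizes its masks, whereas this sketch uses a hard circular low-pass cutoff, so the `radius` parameter and all shapes are assumptions.

```python
import torch

def frequency_decouple(img, radius=0.1):
    """Split an image into low- and high-frequency components with an FFT mask.

    img: (B, C, H, W) tensor. `radius` is the normalized cutoff of the
    low-pass mask; a fixed mask stands in for the FDM's learned one.
    """
    B, C, H, W = img.shape
    freq = torch.fft.fftshift(torch.fft.fft2(img), dim=(-2, -1))
    # Circular low-pass mask centered on the DC component.
    yy, xx = torch.meshgrid(
        torch.linspace(-0.5, 0.5, H), torch.linspace(-0.5, 0.5, W), indexing="ij"
    )
    low_mask = ((yy**2 + xx**2).sqrt() <= radius).to(freq.dtype)
    low = torch.fft.ifft2(torch.fft.ifftshift(freq * low_mask, dim=(-2, -1))).real
    high = img - low  # residual carries the high-frequency structure
    return low, high

low, high = frequency_decouple(torch.rand(1, 3, 64, 64))
print(low.shape, high.shape)
```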
{"title":"Nighttime flare removal via frequency decoupling","authors":"Minglong Xue ,&nbsp;Aoxiang Ning ,&nbsp;Jinhong He ,&nbsp;Shuaibin Fan ,&nbsp;Senming Zhong","doi":"10.1016/j.patrec.2026.01.012","DOIUrl":"10.1016/j.patrec.2026.01.012","url":null,"abstract":"<div><div>Existing methods for nighttime flare removal struggle to effectively decouple flare features from normal image texture features, frequently resulting in the loss of local details after flare removal. Due to the differences in the frequency distribution characteristics of flare and content structural information—where flare and lighting information are primarily concentrated in the low-frequency range, while content structural information is concentrated in the high-frequency range—we propose a frequency decoupling de-flare network (FDDNet). This method effectively decouples flare from content, enabling efficient flare removal. Specifically, the network consists of the Frequency Decoupling Module (FDM) and the Frequency Fusion Module (FFM). The FDM divides the image’s frequency features into low-frequency and high-frequency components by setting masks. It dynamically optimizes its weights to effectively decouple flare from content while maximizing the retention of structural content information. In addition, based on the traditional skip connections, we propose the Frequency Fusion Module. The module separately fuses the amplitude and phase of features from both the encoding and decoding stages, reducing the impact of flare and brightness anomalies on the reconstructed image while repairing local damage caused by flare removal. Extensive experiments show that our method significantly improves the performance of nighttime flare removal.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"201 ","pages":"Pages 73-79"},"PeriodicalIF":3.3,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145979045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Signature-in-signature: A fidelity-preserving and usability-ensuring framework for dynamic handwritten signature protection
IF 3.3 | CAS Tier 3 (Computer Science) | Q2 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | Pub Date: 2026-03-01 | Epub Date: 2026-01-07 | DOI: 10.1016/j.patrec.2026.01.001
Tianyu Chen, Qi Cui, Zhangjie Fu
Dynamic handwritten signature (DHS) verification is widely used for identity authentication in modern applications, offering a blend of convenience and security. However, traditional verification processes require users to upload multiple signature templates to remote application servers, posing critical risks to the privacy and security of sensitive data. To address these concerns, this paper proposes a robust watermarking framework for DHS data. By embedding unique watermarks as digital signatures into DHS data, the framework ensures effective traceability, allowing sources to be identified in cases of DHS misuse or leakage. Specifically, we introduce a velocity-based loss function that minimizes trajectory distortion during watermark embedding, effectively preserving the fidelity of the DHS. In parallel, the training process leverages contrastive learning to ensure that the watermarked DHS remains closer to the original signature in feature space than to other DHS templates. This design guarantees that the usability of the watermarked DHS is unaffected, maintaining the accuracy and reliability of signature verification systems. Extensive experiments on a large-scale dynamic signature dataset demonstrate that the watermarked signatures retain visual integrity and that the embedded watermarks remain imperceptible to human observation. Furthermore, the embedded watermarks are compatible with a wide range of existing verification methods, ensuring that the framework does not compromise verification performance.
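A velocity-based loss of the kind described can be sketched by matching first-order differences of the pen trajectory; this toy version (shapes and the plain MSE form are assumptions) penalizes distortion of writing dynamics rather than absolute position.

```python
import torch

def velocity_loss(orig, marked):
    """Velocity-preserving loss between original and watermarked trajectories.

    orig, marked: (B, T, 2) pen-tip coordinate sequences. The first-order
    difference approximates pen velocity; matching it penalizes distortions
    to writing dynamics rather than to absolute position.
    """
    v_orig = orig[:, 1:] - orig[:, :-1]
    v_marked = marked[:, 1:] - marked[:, :-1]
    return torch.mean((v_orig - v_marked) ** 2)

sig = torch.cumsum(torch.randn(4, 128, 2), dim=1)  # synthetic trajectories
marked = sig + 0.01 * torch.randn_like(sig)        # small embedding perturbation
print(velocity_loss(sig, marked))
```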
{"title":"Signature-in-signature: A fidelity-preserving and usability-ensuring framework for dynamic handwritten signature protection","authors":"Tianyu Chen,&nbsp;Qi Cui,&nbsp;Zhangjie Fu","doi":"10.1016/j.patrec.2026.01.001","DOIUrl":"10.1016/j.patrec.2026.01.001","url":null,"abstract":"<div><div>Dynamic handwritten signature (DHS) verification is widely used for identity authentication in modern applications, offering a blend of convenience and security. However, traditional verification processes necessitate users to upload multiple signature templates to remote application servers, raising critical risks to the privacy and security of sensitive data. To address these concerns, this paper proposes a robust watermarking framework for DHS data. By embedding unique watermarks as digital signatures into DHS data, the framework ensures effective traceability of DHS, allowing the identification of sources in case of DHS misuse or leakage. Specifically, we introduce a velocity-based loss function that minimizes trajectory distortion during the watermark embedding process, effectively preserving the fidelity of the DHS. In parallel, the training process leverages contrastive learning to ensure that the watermarked DHS remains closer to the original signature in feature space than to other DHS templates. This design guarantees that the usability of the watermarked DHS is unaffected, maintaining the accuracy and reliability of signature verification systems. Extensive experiments conducted on the large-scale dynamic signature dataset demonstrate that the watermarked signatures retain visual integrity and remain imperceptible to human observation. Furthermore, the embedded watermarks exhibit compatibility with a wide range of existing verification methods, ensuring that the framework does not compromise existing verification performance.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"201 ","pages":"Pages 45-51"},"PeriodicalIF":3.3,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145979044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Data augmentation in time series forecasting through inverted framework
IF 3.3 | CAS Tier 3 (Computer Science) | Q2 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | Pub Date: 2026-03-01 | Epub Date: 2026-01-23 | DOI: 10.1016/j.patrec.2026.01.019
Hongming Tan, Ting Chen, Ruochong Jin, Wai Kin Victor Chan
Currently, iTransformer is one of the most popular and effective models for multivariate time series (MTS) forecasting. Thanks to its inverted framework, iTransformer effectively captures multivariate correlation. However, the inverted framework still has limitations: it diminishes temporal interdependency information and introduces noise when variable correlations are weak. To address these limitations, we introduce a novel data augmentation method for the inverted framework, called DAIF. Unlike previous data augmentation methods, DAIF is the first real-time augmentation specifically designed for the inverted framework in MTS forecasting. We first define the structure of the inverted sequence-to-sequence framework, then propose two DAIF strategies, Frequency Filtering and Cross-variation Patching, to address the existing challenges of the inverted framework. Experiments across multiple datasets and inverted models demonstrate the effectiveness of DAIF. Our code is available at https://github.com/Travistan123/time-series-daif.
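The Frequency Filtering strategy can be illustrated with a simple rFFT-based augmentation that preserves low-frequency trend while randomly perturbing high-frequency detail; the `keep_ratio` and the drop rule are hypothetical, not DAIF's actual procedure.

```python
import numpy as np

def frequency_filter_augment(series, keep_ratio=0.9, seed=0):
    """Augment a multivariate series by randomly dropping high-frequency bins.

    series: (T, V) array of T steps and V variables. Each variable keeps its
    lowest-frequency rFFT bins (a `keep_ratio` fraction) and loses a random
    subset of the rest, so the augmented copy preserves trend while
    perturbing fine detail.
    """
    rng = np.random.default_rng(seed)
    spec = np.fft.rfft(series, axis=0)          # (T//2 + 1, V)
    n_bins = spec.shape[0]
    n_keep = int(n_bins * keep_ratio)
    mask = np.ones_like(spec, dtype=float)
    # Randomly zero out half of the highest-frequency bins per variable.
    for v in range(spec.shape[1]):
        drop = rng.choice(np.arange(n_keep, n_bins),
                          size=(n_bins - n_keep) // 2, replace=False)
        mask[drop, v] = 0.0
    return np.fft.irfft(spec * mask, n=series.shape[0], axis=0)

x = np.random.randn(96, 7).cumsum(axis=0)
print(frequency_filter_augment(x).shape)  # (96, 7)
```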
{"title":"Data augmentation in time series forecasting through inverted framework","authors":"Hongming Tan ,&nbsp;Ting Chen ,&nbsp;Ruochong Jin ,&nbsp;Wai Kin Victor Chan","doi":"10.1016/j.patrec.2026.01.019","DOIUrl":"10.1016/j.patrec.2026.01.019","url":null,"abstract":"<div><div>Currently, iTransformer is one of the most popular and effective models for multivariate time series (MTS) forecasting. Thanks to its inverted framework, iTransformer effectively captures multivariate correlation. However, the inverted framework still has some limitations. It diminishes temporal interdependency information, and introduces noise in cases of nonsignificant variable correlation. To address these limitations, we introduce a novel data augmentation method on inverted framework, called DAIF. Unlike previous data augmentation methods, DAIF stands out as the first real-time augmentation specifically designed for the inverted framework in MTS forecasting. We first define the structure of the inverted sequence-to-sequence framework, then propose two different DAIF strategies, Frequency Filtering and Cross-variation Patching to address the existing challenges of the inverted framework. Experiments across multiple datasets and inverted models have demonstrated the effectiveness of our DAIF. Our codes are available at <span><span>https://github.com/Travistan123/time-series-daif</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"201 ","pages":"Pages 152-159"},"PeriodicalIF":3.3,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146090253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Voxel-MPI: Scene-adaptive multiplane images based local voxel tokenization with attention coordination for 3D scene representation
IF 3.3 | CAS Tier 3 (Computer Science) | Q2 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | Pub Date: 2026-03-01 | Epub Date: 2026-01-23 | DOI: 10.1016/j.patrec.2026.01.006
Yu Liu, Xin Ding, Qiong Liu
With the continuous improvement of learning models, 3D scene reconstruction for novel view synthesis has seen remarkable progress in recent years. Compared with mainstream 3D reconstruction methods such as NeRF and 3DGS, the Multiplane Image (MPI) method strikes a notable balance between computational efficiency and preservation of global structure. To enhance scene details, some studies combine local planes with a global Multilayer Perceptron (MLP) for MPI representation. However, the inherent global consistency of a global MLP hinders adaptive learning of local density information, leading to the loss of local geometric and texture details in rendered images. To address this issue, we propose Voxel-MPI, which adaptively enhances local texture representation in MPI. First, we voxelize the global MPI and encode an independent MLP network for the local MPI of each voxel, enabling adaptive learning of local scene information. However, learning each local MPI independently can produce inconsistent rendering between blocks, causing blocky artifacts. To mitigate this, we design a Voxel Attention Block that coordinates information learned across voxel-based local MPIs at the same depth, ensuring consistency and coherence in scene rendering. Experimental results demonstrate that our method outperforms existing methods on widely used real-world datasets.
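The per-voxel independent-MLP idea can be sketched with grouped 1x1 convolutions, which implement a bank of independent tiny MLPs in one module; all dimensions, and the grouped-conv realization itself, are illustrative assumptions rather than the Voxel-MPI code.

```python
import torch
import torch.nn as nn

class PerVoxelMLP(nn.Module):
    """Toy illustration of voxel-local MLPs over a multiplane-image grid.

    Each of `n_voxels` regions of the MPI grid gets its own tiny MLP, so
    local density/appearance is fit independently. Grouped 1x1 convolutions
    realize the n_voxels independent MLPs in a single module.
    """
    def __init__(self, n_voxels, in_dim=16, hidden=32, out_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_voxels * in_dim, n_voxels * hidden, 1, groups=n_voxels),
            nn.ReLU(),
            nn.Conv1d(n_voxels * hidden, n_voxels * out_dim, 1, groups=n_voxels),
        )

    def forward(self, feats):
        # feats: (B, n_voxels, in_dim, P) with P sample points per voxel.
        B, V, C, P = feats.shape
        out = self.net(feats.reshape(B, V * C, P))
        return out.reshape(B, V, -1, P)  # (B, V, out_dim, P), e.g. RGB + density

mlp = PerVoxelMLP(n_voxels=8)
print(mlp(torch.randn(2, 8, 16, 64)).shape)  # torch.Size([2, 8, 4, 64])
```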
{"title":"Voxel-MPI: Scene-adaptive multiplane images based local voxel tokenization with attention coordination for 3D scene representation","authors":"Yu Liu,&nbsp;Xin Ding,&nbsp;Qiong Liu","doi":"10.1016/j.patrec.2026.01.006","DOIUrl":"10.1016/j.patrec.2026.01.006","url":null,"abstract":"<div><div>With the continuous optimization of learning models, 3D scene reconstruction for novel view synthesis has witnessed remarkable progress and rapid development in recent years. Compared to mainstream 3D reconstruction methods such as NeRF and 3DGS, the Multiplane Image (MPI) method demonstrates a significant balance between computational efficiency and the preservation of global structure. To enhance scene details, some studies combine local planes with global Multilayer Perceptron (MLP) learning for MPI representation. However, the inherent global consistency of global MLP networks hinders the adaptive learning of local density information, leading to the loss of local geometric and texture details in the rendered images. To address this issue, we propose a method called Voxel-MPI, which adaptively enhances local texture representation in MPI. First, we voxelize the global MPI and encode an independent MLP network for the local MPI of each voxel, enabling adaptive learning of local scene information. Next, independently learning each local MPI can lead to inconsistent rendering between blocks, causing blocky artifacts. To mitigate this, we design a Voxel Attention Block that coordinates information learned across voxel-based local MPI at the same depth, ensuring consistency and coherence in scene rendering. Experimental results demonstrate that our method outperforms existing methods on widely used real-world datasets.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"201 ","pages":"Pages 168-173"},"PeriodicalIF":3.3,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146090255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
MM-Net: Facial expression recognition based on multi-level and multi-scale attention mechanisms
IF 3.3 | CAS Tier 3 (Computer Science) | Q2 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | Pub Date: 2026-03-01 | Epub Date: 2026-01-13 | DOI: 10.1016/j.patrec.2026.01.007
Dongjing Wang, Hao Peng, Xin Zhang, Na Li, Wenxiu Wang, Jinlin Zhu, Shuiguang Deng
Facial expression recognition (FER) has gained significant attention due to its diverse applications. Accurate FER requires considering both global central features and subtle local features. To this end, we propose the Multi-level and Multi-scale Network (MM-Net), an FER network that leverages both multi-level and multi-scale attention mechanisms. Specifically, we design a multi-hierarchical feature learning mechanism built on a Multi-Level Attention Block (MLAB) and a Multi-Scale Attention Block (MSAB). The MLAB focuses on learning fine-grained features with adaptive attention across different blocks in the shallow network, while the MSAB facilitates multi-scale fusion of deep features, enabling the network to capture richer semantic information and feature representations. In addition, we propose a Limited Center Loss, which optimizes the network by minimizing intra-class distances while enlarging the gaps between different classes. Experimental results on public datasets show that MM-Net outperforms current state-of-the-art methods, achieving 90.42% on RAF-DB, 90.05% on FERPlus, 65.91% on AffectNet, and 57.52% on SFEW.
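A center-loss variant with an inter-class margin gives the flavor of the Limited Center Loss: pull each feature toward its class center, push different centers apart. This is only a plausible reading; the hinge form and all hyperparameters are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class MarginCenterLoss(nn.Module):
    """Center loss with an inter-class margin (illustrative sketch).

    Pulls each feature toward its class center while pushing centers of
    different classes at least `margin` apart.
    """
    def __init__(self, n_classes, feat_dim, margin=2.0):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(n_classes, feat_dim))
        self.margin = margin

    def forward(self, feats, labels):
        # Intra-class pull: squared distance to the assigned class center.
        intra = ((feats - self.centers[labels]) ** 2).sum(dim=1).mean()
        # Inter-class push: hinge on pairwise center distances.
        dists = torch.cdist(self.centers, self.centers)
        off_diag = dists[~torch.eye(len(self.centers), dtype=torch.bool)]
        inter = torch.relu(self.margin - off_diag).mean()
        return intra + inter

loss_fn = MarginCenterLoss(n_classes=7, feat_dim=128)
print(loss_fn(torch.randn(32, 128), torch.randint(0, 7, (32,))))
```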
{"title":"MM-Net: Facial expression recognition based on multi-level and multi-scale attention mechanisms","authors":"Dongjing Wang ,&nbsp;Hao Peng ,&nbsp;Xin Zhang ,&nbsp;Na Li ,&nbsp;Wenxiu Wang ,&nbsp;Jinlin Zhu ,&nbsp;Shuiguang Deng","doi":"10.1016/j.patrec.2026.01.007","DOIUrl":"10.1016/j.patrec.2026.01.007","url":null,"abstract":"<div><div>Facial expression recognition (FER) has gained significant attention due to its diverse applications. Achieving accurate facial expression recognition requires the consideration of both global central features and subtle local features. To this end, we propose the Multi-level and Multi-scale Network (MM-Net), an FER network that leverages both multi-level and multi-scale attention mechanisms. Specifically, we design a multi-hierarchical feature learning mechanism to facilitate the FER task with Multi-Level Attention Block (MLAB) and Multi-scale Attention Block (MSAB). The MLAB focuses on learning fine-grained features with adaptive attention across different blocks in the shallow network. Meanwhile, the MSAB facilitates the multi-scale fusion of deep features, enabling the network to capture richer semantic information and expression of features. In addition, we propose Limited Center Loss, which optimizes the network by minimizing the distance between the same classes while increasing the gap between different classes. Experimental results on public datasets show that our proposed MM-Net outperforms current state-of-the-art methods, achieving results of 90.42% on the RAF-DB dataset, 90.05% on FERPlus, 65.91% on AffectNet, and 57.52% on SFEW.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"201 ","pages":"Pages 87-94"},"PeriodicalIF":3.3,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146038423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
End-to-end interactive joint model: Clause-phrase multi-task learning for suicidal ideation cause extraction (SICE) in Chinese Weibo text
IF 3.3 | CAS Tier 3 (Computer Science) | Q2 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | Pub Date: 2026-02-01 | Epub Date: 2025-11-27 | DOI: 10.1016/j.patrec.2025.11.036
Qi Fu, Yuhao Zhang, Dexi Liu, Liyuan Zhang, Wenzhong Peng
Suicide prevention has been a critical research focus for governments, mental health professionals, and social work researchers worldwide. With increasing numbers of individuals seeking help through social networks and psychological counseling platforms, timely analysis of the causes of Suicidal Ideation (SI) in help-seeking texts can provide scientific evidence and actionable insights for suicide prevention efforts. Existing approaches face two challenges: (i) Suicidal Ideation Cause (SIC) clause extraction is coarse-grained and thus imprecise in localization; (ii) SIC phrase extraction is more precise but inherently harder. To address this, we propose an end-to-end interactive joint model (EIJM) based on a clause-phrase multi-task learning (MTL) framework, where SIC phrase extraction serves as the main task and SIC clause extraction as the auxiliary task. By leveraging joint learning, EIJM enhances extraction accuracy while reducing task difficulty. Experimental results demonstrate that EIJM outperforms the two-stage independent multi-task (2SIM) approach across multiple evaluation metrics. Specifically, in the SIC phrase extraction task, EIJM achieves a 1.1% improvement in recall over 2SIM without compromising precision. In the SIC clause extraction task, EIJM improves precision, recall, and F1-score by 0.4%, 0.9%, and 0.7%, respectively. Furthermore, in 2SIM, incorporating clause-level representations from the auxiliary task into the main task improves the local-matching and fuzzy-matching metrics, with the fuzzy-match metric improving the most, by 0.9%; however, it yields limited improvement in exact-matching performance.
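The clause-phrase multi-task setup can be sketched as a shared encoder feeding two token-tagging heads with a weighted joint loss; the architecture, tag scheme, and the 0.5 auxiliary weight are placeholders, not EIJM's actual design.

```python
import torch
import torch.nn as nn

class ClausePhraseMTL(nn.Module):
    """Minimal shared-encoder multi-task model: phrase (main) + clause (auxiliary).

    One encoder feeds two token-level taggers; the joint loss weights the
    auxiliary clause task below the main phrase task.
    """
    def __init__(self, vocab=5000, dim=128, n_tags=3):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True, bidirectional=True)
        self.phrase_head = nn.Linear(2 * dim, n_tags)  # main: BIO over phrases
        self.clause_head = nn.Linear(2 * dim, n_tags)  # auxiliary: BIO over clauses
        self.ce = nn.CrossEntropyLoss()

    def forward(self, tokens, phrase_tags, clause_tags, aux_weight=0.5):
        h, _ = self.encoder(self.embed(tokens))
        loss_main = self.ce(self.phrase_head(h).flatten(0, 1), phrase_tags.flatten())
        loss_aux = self.ce(self.clause_head(h).flatten(0, 1), clause_tags.flatten())
        return loss_main + aux_weight * loss_aux

model = ClausePhraseMTL()
toks = torch.randint(0, 5000, (4, 20))
tags = torch.randint(0, 3, (4, 20))
print(model(toks, tags, tags))
```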
{"title":"End-to-end interactive joint model: Clause-phrase multi-task learning for suicidal ideation cause extraction (SICE) in Chinese Weibo text","authors":"Qi Fu ,&nbsp;Yuhao Zhang ,&nbsp;Dexi Liu ,&nbsp;Liyuan Zhang ,&nbsp;Wenzhong Peng","doi":"10.1016/j.patrec.2025.11.036","DOIUrl":"10.1016/j.patrec.2025.11.036","url":null,"abstract":"<div><div>Suicide prevention has been a critical research focus for governments, mental health professionals, and social work researchers worldwide. With the increasing number of individuals seeking help through social networks and psychological counseling platforms, timely analysis of the causes of Suicidal Ideation (SI) in help-seeking texts can provide scientific evidence and actionable insights for suicide prevention efforts. Existing approaches face challenges: (i) SIC clause extraction is coarse-grained and thus imprecise in localization; (ii) SIC phrase extraction is more precise but inherently harder. To address this, we propose an end-to-end interactive joint model (EIJM) based on a clause-phrase multi-task learning (MTL) framework, where SIC phrase extraction serves as the main task and SIC clause extraction as the auxiliary task. By leveraging joint learning, EIJM enhances extraction accuracy while reducing task difficulty. Experimental results demonstrate that EIJM outperforms the two-stage independent multi-task (2SIM) approach across multiple evaluation metrics. Specifically, in the SIC phrase extraction task, EIJM achieves a 1.1 % improvement in recall over 2SIM without compromising precision. In the SIC clause extraction task, EIJM improves precision, recall, and F1-score by 0.4 %, 0.9 %, and 0.7 %, respectively. Furthermore, in 2SIM, incorporating clause-level representations from the auxiliary task into the main task enhances local matching and fuzzy matching metrics, with the Fuzzy Match method improving the most by 0.9 %. However, it yielded limited improvement in exact matching performance.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"200 ","pages":"Pages 1-7"},"PeriodicalIF":3.3,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145658791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An illumination-robust feature decomposition approach for low-light crowd counting
IF 3.3 | CAS Tier 3 (Computer Science) | Q2 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | Pub Date: 2026-02-01 | Epub Date: 2025-12-16 | DOI: 10.1016/j.patrec.2025.12.005
Jian Cheng, Chen Feng, Yang Xiao, Zhiguo Cao
Crowd counting is widely studied, yet its reliability in low-light environments remains underexplored. Standard counters perform poorly on low-light scenes due to degraded image quality; applying image-enhancement pre-processing yields limited improvement; and introducing additional thermal inputs increases cost. This study presents an approach that requires only annotated normal-light RGB data. To learn illumination-robust representations, we construct normal-light and low-light image pairs and decompose their features into common and unique components. The common components preserve shared, and thus illumination-robust, information, so they are optimized for density map prediction. We also introduce a dataset for evaluating crowd counting performance in low-light conditions. Experiments show that our approach consistently improves performance across multiple baseline architectures with negligible computational overhead. The source code and dataset will be made publicly available upon acceptance at https://github.com/hustaia/Feature_Decomposition_Counting.
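The common/unique decomposition can be illustrated with paired features split into a shared half (pulled together across illumination) and a unique half (pushed toward orthogonality); the even channel split and the specific loss terms are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def decomposition_losses(feat_normal, feat_low, common_dim=128):
    """Split paired features into common and unique halves and score them.

    feat_normal / feat_low: (B, 2*common_dim) features from a normal-light
    image and its low-light counterpart. First half = shared
    (illumination-robust) component, second half = image-specific component.
    """
    c_n, u_n = feat_normal.split(common_dim, dim=1)
    c_l, u_l = feat_low.split(common_dim, dim=1)
    # Shared parts of the pair must agree regardless of illumination.
    l_common = F.mse_loss(c_n, c_l)
    # Soft orthogonality: unique parts should not duplicate the shared part.
    l_orth = (F.cosine_similarity(c_n, u_n, dim=1).abs().mean()
              + F.cosine_similarity(c_l, u_l, dim=1).abs().mean())
    return l_common, l_orth

f_n, f_l = torch.randn(8, 256), torch.randn(8, 256)
print(decomposition_losses(f_n, f_l))
```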
{"title":"An illumination-robust feature decomposition approach for low-light crowd counting","authors":"Jian Cheng,&nbsp;Chen Feng,&nbsp;Yang Xiao,&nbsp;Zhiguo Cao","doi":"10.1016/j.patrec.2025.12.005","DOIUrl":"10.1016/j.patrec.2025.12.005","url":null,"abstract":"<div><div>Crowd counting is widely studied, yet its reliability in low-light environments remains underexplored. Regular counters fail to perform well due to poor image quality; applying image enhancement pre-processing yields limited improvement; and introducing additional thermal inputs increases cost. This study presents an approach that only requires annotated normal-light RGB data. To learn illumination-robust representations, we construct normal- and low-light image pairs and decompose their features into common and unique components. The common components preserve shared thus illumination-robust information, so they are optimized for density map prediction. We also introduce a dataset for evaluating crowd counting performance in low-light conditions. Experiments show that our approach consistently improves performance on multiple baseline architectures with negligible computational overhead. The source code and dataset will be made publicly available upon acceptance at <span><span>https://github.com/hustaia/Feature_Decomposition_Counting</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"200 ","pages":"Pages 108-114"},"PeriodicalIF":3.3,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145840466","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
TriGAN-SiaMT: A triple-segmentor adversarial network with bounding box priors for semi-supervised brain lesion segmentation
IF 3.3 | CAS Tier 3 (Computer Science) | Q2 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | Pub Date: 2026-02-01 | Epub Date: 2025-11-29 | DOI: 10.1016/j.patrec.2025.11.032
Mohammad Alshurbaji, Maregu Assefa, Ahmad Obeid, Mohamed L. Seghier, Taimur Hassan, Kamal Taha, Naoufel Werghi
Accurate brain lesion segmentation in MRI is critical for clinical decision-making, but pixel-wise annotations remain costly and time-consuming. We propose TriGAN-SiaMT, a novel semi-supervised segmentation framework that combines adversarial learning, consistency regularization, and bounding box priors. Our architecture comprises three segmentors (S0, S1, S2) and two discriminators (D0, D1). It includes: (1) a supervised branch (S0↔D0) trained on a small labeled subset; (2) a Siamese branch (S1↔D1) with an architecture identical to S0↔D0, but trained on unlabeled data; and (3) a teacher branch (S2) updated via exponential moving average (EMA) from S1, following the Mean Teacher (MT) paradigm. The teacher S2 generates pseudo-labels to supervise S1, and provides soft segmentations to guide D1, which never sees labeled data. The model enforces consistency at multiple levels: between S0 and S1 (Siamese consistency), and between S1 and S2 (EMA consistency). Bounding box priors serve as weak supervision for both labeled and unlabeled images, improving lesion localization. Evaluated on the ISLES 2022 and BraTS 2019 datasets, TriGAN-SiaMT achieves DSC scores of 84.80% and 86.32%, respectively, using only 5% labeled data. These results demonstrate strong performance under limited supervision and robust generalization across brain lesions.
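The EMA update from S1 to S2 follows the standard Mean Teacher rule, sketched below; the decay value is a typical placeholder, not necessarily the paper's setting.

```python
import copy
import torch
import torch.nn as nn

@torch.no_grad()
def ema_update(teacher, student, decay=0.99):
    """Mean Teacher EMA step: teacher weights drift toward the student's.

    After each student optimizer step, every teacher parameter p_t becomes
    decay * p_t + (1 - decay) * p_s, which is how S2 tracks S1.
    """
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(decay).add_(p_s, alpha=1.0 - decay)

student = nn.Conv2d(1, 8, 3)
teacher = copy.deepcopy(student)  # teacher starts as a frozen copy
ema_update(teacher, student)
print(torch.allclose(teacher.weight, student.weight))  # True before divergence
```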
{"title":"TriGAN-SiaMT: A triple-segmentor adversarial network with bounding box priors for semi-supervised brain lesion segmentation","authors":"Mohammad Alshurbaji ,&nbsp;Maregu Assefa ,&nbsp;Ahmad Obeid ,&nbsp;Mohamed L. Seghier ,&nbsp;Taimur Hassan ,&nbsp;Kamal Taha ,&nbsp;Naoufel Werghi","doi":"10.1016/j.patrec.2025.11.032","DOIUrl":"10.1016/j.patrec.2025.11.032","url":null,"abstract":"<div><div>Accurate brain lesion segmentation in MRI is critical for clinical decision-making, but pixel-wise annotations remain costly and time-consuming. We propose TriGAN-SiaMT, a novel semi-supervised segmentation framework that combines adversarial learning, consistency regularization, and bounding box priors. Our architecture comprises three segmentors (<em>S</em><sub>0</sub>, <em>S</em><sub>1</sub>, <em>S</em><sub>2</sub>) and two discriminators (<em>D</em><sub>0</sub>, <em>D</em><sub>1</sub>). It includes: (1) a supervised branch (<em>S</em><sub>0</sub>↔<em>D</em><sub>0</sub>) trained on a small labeled subset; (2) a Siamese branch (<em>S</em><sub>1</sub>↔<em>D</em><sub>1</sub>) with an identical architecture to <em>S</em><sub>0</sub>↔<em>D</em><sub>0</sub>, but trained on unlabeled data; and (3) a teacher branch (<em>S</em><sub>2</sub>) updated via exponential moving average (EMA) from <em>S</em><sub>1</sub>, following the Mean Teacher (MT) paradigm. The teacher <em>S</em><sub>2</sub> generates pseudo-labels to supervise <em>S</em><sub>1</sub>. It also provides soft segmentations to guide <em>D</em><sub>1</sub>, which does not see any labeled data. The model enforces consistency at multiple levels: between <em>S</em><sub>0</sub> and <em>S</em><sub>1</sub> (Siamese consistency), and between <em>S</em><sub>1</sub> and <em>S</em><sub>2</sub> (EMA consistency). Bounding box priors are incorporated as weak supervision for both labeled and unlabeled images, improving lesion localization. Evaluated on the ISLES 2022 and BraTS 2019 datasets, TriGAN-SiaMT achieves DSC scores of 84.80 % and 86.32 %, respectively, using only 5 % labeled data. These results demonstrate strong performance under limited supervision and robust generalization across brain lesions.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"200 ","pages":"Pages 37-43"},"PeriodicalIF":3.3,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145694687","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Improving pseudo-labelling for semi-supervised single-class instance segmentation via mask symmetry scoring
IF 3.3 | CAS Tier 3 (Computer Science) | Q2 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | Pub Date: 2026-02-01 | Epub Date: 2025-12-02 | DOI: 10.1016/j.patrec.2025.11.044
Bradley Hurst, Nicola Bellotto, Petra Bosilj
Semi-supervised teacher-student pseudo-labelling improves instance segmentation by exploiting unlabelled data: a teacher network trained on a small annotated dataset generates pseudo-labels for the remaining data, which are then used to train the student model. However, mask selection typically relies heavily on class confidence scores. In single-class settings these scores saturate, offering little discrimination between masks. In this work we propose a mask symmetry score that evaluates logits from the mask prediction head, enabling more reliable pseudo-label selection without architectural changes. Evaluations on both CNN- and Transformer-based models show our method outperforms state-of-the-art approaches on a real-world agri-robotic dataset of densely clustered potato tubers.
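One way a "symmetry" score over mask logits might look is sketched below: a confident, clean single-class mask tends to have foreground and background logits of comparable magnitude. This ratio-based score is one plausible reading of the idea, not the paper's formula.

```python
import torch

def mask_symmetry_score(mask_logits):
    """Score a predicted mask by the symmetry of its logit distribution.

    mask_logits: (H, W) raw outputs of the mask head. The score compares
    the mean magnitude of foreground (positive) and background (negative)
    logits; values near 1 indicate a balanced, confident prediction.
    """
    fg = mask_logits[mask_logits > 0]
    bg = -mask_logits[mask_logits <= 0]
    if fg.numel() == 0 or bg.numel() == 0:
        return torch.tensor(0.0)  # degenerate all-foreground/all-background mask
    m_fg, m_bg = fg.mean(), bg.mean()
    return torch.minimum(m_fg, m_bg) / torch.maximum(m_fg, m_bg)  # in (0, 1]

logits = torch.randn(64, 64) * 4.0
print(mask_symmetry_score(logits))  # near 1.0 for symmetric logits
```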
{"title":"Improving pseudo-labelling for semi-supervised single-class instance segmentation via mask symmetry scoring","authors":"Bradley Hurst ,&nbsp;Nicola Bellotto ,&nbsp;Petra Bosilj","doi":"10.1016/j.patrec.2025.11.044","DOIUrl":"10.1016/j.patrec.2025.11.044","url":null,"abstract":"<div><div>Semi-supervised teacher-student pseudo-labelling improves instance segmentation by exploiting unlabelled data, where a teacher network, trained with a small annotated dataset, generates pseudo labels for the remaining data, to train the student model. However, mask selection typically relies heavily on the class confidence scores. In single-class settings these scores saturate, offering little discrimination between masks. In this work we propose a mask symmetry score that evaluates logits from the mask prediction head, enabling more reliable pseudo-label selection without architectural changes. Evaluations on both CNN- and Transformer-based models show our method outperforms state-of-the-art approaches on a real-world agri-robotic dataset of densely clustered potato tubers.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"200 ","pages":"Pages 60-66"},"PeriodicalIF":3.3,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145749053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0