
Latest Publications in IET Biometrics

Exploring Static–Dynamic ID Matching and Temporal Static ID Inconsistency for Generalizable Deepfake Detection
IF 1.8 | CAS Zone 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-06-09 | DOI: 10.1049/2024/2280143
Huimin She, Yongjian Hu, Beibei Liu, Chang-Tsun Li

Identity-based Deepfake detection methods have the potential to improve the generalization, robustness, and interpretability of the model. However, current identity-based methods either require a reference or can only be used to detect face replacement but not face reenactment. In this paper, we propose a novel Deepfake video detection approach based on identity anomalies. We observe two types of identity anomalies: the inconsistency between clip-level static ID (facial appearance) and clip-level dynamic ID (facial behavior), and the temporal inconsistency of image-level static IDs. Since these two types of anomalies can be detected through self-consistency and do not depend on the manipulation type, our method is reference-free and manipulation-independent. Specifically, our detection network consists of two branches: a static–dynamic ID discrepancy detection branch for the inconsistency between dynamic and static ID, and a temporal static ID anomaly detection branch for the temporal anomaly of static ID. We combine the outputs of the two branches by weighted averaging to obtain the final detection result. We also design two loss functions, the static–dynamic ID matching loss and the dynamic ID constraint loss, to enhance the representation and discriminability of dynamic ID. We conduct experiments on four benchmark datasets and compare our method with the state-of-the-art methods. Results show that our method detects not only face replacement but also face reenactment, and outperforms the state-of-the-art methods on unknown datasets. It also shows superior robustness against compression. Identity-based features provide a good explanation of the detection results.
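The two branch scores are combined by weighted averaging to produce the final decision. A minimal Python sketch of that fusion step (the weight alpha, the 0.5 threshold, and all names are illustrative assumptions; the abstract does not report the actual weights):

```python
import numpy as np

def fuse_scores(static_dynamic_score: float, temporal_static_score: float,
                alpha: float = 0.5) -> float:
    """Weighted average of the two branch outputs.

    alpha is a hypothetical fusion weight; the paper does not state
    the value it uses in the abstract.
    """
    return alpha * static_dynamic_score + (1.0 - alpha) * temporal_static_score

# Example: branch scores in [0, 1], higher = more likely fake.
final_score = fuse_scores(0.82, 0.64, alpha=0.6)
is_fake = final_score > 0.5  # illustrative decision threshold
```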

{"title":"Exploring Static–Dynamic ID Matching and Temporal Static ID Inconsistency for Generalizable Deepfake Detection","authors":"Huimin She,&nbsp;Yongjian Hu,&nbsp;Beibei Liu,&nbsp;Chang-Tsun Li","doi":"10.1049/2024/2280143","DOIUrl":"10.1049/2024/2280143","url":null,"abstract":"<p>Identity-based Deepfake detection methods have the potential to improve the generalization, robustness, and interpretability of the model. However, current identity-based methods either require a reference or can only be used to detect face replacement but not face reenactment. In this paper, we propose a novel Deepfake video detection approach based on identity anomalies. We observe two types of identity anomalies: the inconsistency between clip-level static ID (facial appearance) and clip-level dynamic ID (facial behavior) and the temporal inconsistency of image-level static IDs. Since these two types of anomalies can be detected through self-consistency and do not depend on the manipulation type, our method is a reference-free and manipulation-independent approach. Specifically, our detection network consists of two branches: the static–dynamic ID discrepancy detection branch for the inconsistency between dynamic and static ID and the temporal static ID anomaly detection branch for the temporal anomaly of static ID. We combine the outputs of the two branches by weighted averaging to obtain the final detection result. We also designed two loss functions: the static–dynamic ID matching loss and the dynamic ID constraint loss, to enhance the representation and discriminability of dynamic ID. We conduct experiments on four benchmark datasets and compare our method with the state-of-the-art methods. Results show that our method can detect not only face replacement but also face reenactment, and also has better detection performance over the state-of-the-art methods on unknown datasets. It also has superior robustness against compression. Identity-based features provide a good explanation of the detection results.</p>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"2024 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2024-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/2024/2280143","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141298409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Emotion Recognition Based on Handwriting Using Generative Adversarial Networks and Deep Learning
IF 1.8 | CAS Zone 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-05-27 | DOI: 10.1049/2024/5351588
Hengnian Qi, Gang Zeng, Keke Jia, Chu Zhang, Xiaoping Wu, Mengxia Li, Qing Lang, Lingxuan Wang

The quality of people’s lives is closely related to their emotional state. Positive emotions can boost confidence and help overcome difficulties, while negative emotions can harm both physical and mental health. Research has shown that people’s handwriting is associated with their emotions. In this study, audio-visual media were used to induce emotions, and a dot-matrix digital pen was used to collect neutral text data written by participants in three emotional states: calm, happy, and sad. To address the challenge of limited samples, a novel conditional tabular-generative adversarial network (CTAB-GAN) was used to increase the number of task samples, improving the recognition accuracy on task samples by 4.18%. TabNet (a neural network designed for tabular data) combined with SimAM (a simple, parameter-free attention module) was employed and compared with the original TabNet and with traditional machine learning models; incorporating the SimAM attention mechanism led to a 1.35% improvement in classification accuracy. Experimental results revealed significant differences between negative (sad) and nonnegative (calm and happy) emotions, with a recognition accuracy of 80.67%. Overall, this study demonstrates the feasibility of emotion recognition based on handwriting with the assistance of CTAB-GAN and SimAM-TabNet, and provides guidance for further research on emotion recognition and other handwriting-based applications.
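SimAM is a published parameter-free attention operator that weights every activation by a closed-form energy term derived from spatial statistics. Below is a minimal PyTorch sketch of that operator over a 4-D feature map, using the commonly cited default for e_lambda; how it is wired into TabNet's tabular pipeline in this paper is not specified in the abstract and is not reproduced here:

```python
import torch

def simam(x: torch.Tensor, e_lambda: float = 1e-4) -> torch.Tensor:
    """SimAM attention over a (B, C, H, W) feature map, no extra parameters.

    Each activation is reweighted by a sigmoid of its inverse energy,
    computed in closed form from the per-channel spatial mean and variance.
    """
    b, c, h, w = x.shape
    n = h * w - 1
    # Squared deviation from the per-channel spatial mean.
    d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)
    # Per-channel variance estimate.
    v = d.sum(dim=(2, 3), keepdim=True) / n
    # Inverse energy: more distinctive neurons get larger weights.
    e_inv = d / (4 * (v + e_lambda)) + 0.5
    return x * torch.sigmoid(e_inv)
```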

{"title":"Emotion Recognition Based on Handwriting Using Generative Adversarial Networks and Deep Learning","authors":"Hengnian Qi,&nbsp;Gang Zeng,&nbsp;Keke Jia,&nbsp;Chu Zhang,&nbsp;Xiaoping Wu,&nbsp;Mengxia Li,&nbsp;Qing Lang,&nbsp;Lingxuan Wang","doi":"10.1049/2024/5351588","DOIUrl":"10.1049/2024/5351588","url":null,"abstract":"<p>The quality of people’s lives is closely related to their emotional state. Positive emotions can boost confidence and help overcome difficulties, while negative emotions can harm both physical and mental health. Research has shown that people’s handwriting is associated with their emotions. In this study, audio-visual media were used to induce emotions, and a dot-matrix digital pen was used to collect neutral text data written by participants in three emotional states: calm, happy, and sad. To address the challenge of limited samples, a novel conditional table generative adversarial network called conditional tabular-generative adversarial network (CTAB-GAN) was used to increase the number of task samples, and the recognition accuracy of task samples improved by 4.18%. The TabNet (a neural network designed for tabular data) with SimAM (a simple, parameter-free attention module) was employed and compared with the original TabNet and traditional machine learning models; the incorporation of the SimAm attention mechanism led to a 1.35% improvement in classification accuracy. Experimental results revealed significant differences between negative (sad) and nonnegative (calm and happy) emotions, with a recognition accuracy of 80.67%. Overall, this study demonstrated the feasibility of emotion recognition based on handwriting with the assistance of CTAB-GAN and SimAm-TabNet. It provides guidance for further research on emotion recognition or other handwriting-based applications.</p>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"2024 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2024-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/2024/5351588","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141246105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Comparative Study of Cross-Device Finger Vein Recognition Using Classical and Deep Learning Approaches
IF 1.8 | CAS Zone 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-03-25 | DOI: 10.1049/2024/3236602
Tuğçe Arıcan, Raymond Veldhuis, Luuk Spreeuwers, Loïc Bergeron, Christoph Busch, Ehsaneddin Jalilian, Christof Kauba, Simon Kirchgasser, Sébastien Marcel, Bernhard Prommegger, Kiran Raja, Raghavendra Ramachandra, Andreas Uhl

Finger vein recognition is gaining popularity in the field of biometrics, yet the inter-operability of finger vein patterns has received limited attention. This study aims to fill this gap by introducing a cross-device finger vein dataset and evaluating the performance of finger vein recognition across devices using a classical method, a convolutional neural network, and our proposed patch-based convolutional auto-encoder (CAE). The findings emphasise the importance of standardisation in finger vein recognition, similar to that of fingerprints or irises, which is crucial for achieving inter-operability. Despite the inherent challenges of cross-device recognition, the proposed CAE architecture demonstrates promising results in finger vein recognition, particularly in the context of cross-device comparisons.
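The abstract does not detail the CAE architecture, so the following is only a toy PyTorch sketch of a patch-based convolutional auto-encoder; the 32×32 patch size, channel widths, and 64-dimensional embedding are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchCAE(nn.Module):
    """Toy convolutional auto-encoder over small grayscale vein patches."""

    def __init__(self, emb_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, emb_dim),  # patch embedding
        )
        self.decoder = nn.Sequential(
            nn.Linear(emb_dim, 32 * 8 * 8),
            nn.Unflatten(1, (32, 8, 8)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),    # 8 -> 16
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),  # 16 -> 32
        )

    def forward(self, patch: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(patch))

# Train with a reconstruction loss; compare fingers via their patch embeddings.
model = PatchCAE()
x = torch.rand(8, 1, 32, 32)  # batch of hypothetical vein patches
loss = F.mse_loss(model(x), x)
```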

{"title":"A Comparative Study of Cross-Device Finger Vein Recognition Using Classical and Deep Learning Approaches","authors":"Tuğçe Arıcan,&nbsp;Raymond Veldhuis,&nbsp;Luuk Spreeuwers,&nbsp;Loïc Bergeron,&nbsp;Christoph Busch,&nbsp;Ehsaneddin Jalilian,&nbsp;Christof Kauba,&nbsp;Simon Kirchgasser,&nbsp;Sébastien Marcel,&nbsp;Bernhard Prommegger,&nbsp;Kiran Raja,&nbsp;Raghavendra Ramachandra,&nbsp;Andreas Uhl","doi":"10.1049/2024/3236602","DOIUrl":"10.1049/2024/3236602","url":null,"abstract":"<p>Finger vein recognition is gaining popularity in the field of biometrics, yet the inter-operability of finger vein patterns has received limited attention. This study aims to fill this gap by introducing a cross-device finger vein dataset and evaluating the performance of finger vein recognition across devices using a classical method, a convolutional neural network, and our proposed patch-based convolutional auto-encoder (CAE). The findings emphasise the importance of standardisation of finger vein recognition, similar to that of fingerprints or irises, crucial for achieving inter-operability. Despite the inherent challenges of cross-device recognition, the proposed CAE architecture in this study demonstrates promising results in finger vein recognition, particularly in the context of cross-device comparisons.</p>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"2024 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2024-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/2024/3236602","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140381478","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Learning Deep Embedding with Acoustic and Phoneme Features for Speaker Recognition in FM Broadcasting
IF 1.8 | CAS Zone 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-03-22 | DOI: 10.1049/2024/6694481
Xiao Li, Xiao Chen, Rui Fu, Xiao Hu, Mintong Chen, Kun Niu

Text-independent speaker verification (TI-SV) is a crucial task in speaker recognition, as it involves verifying an individual’s claimed identity from speech of arbitrary content without any human intervention. The target of TI-SV is to design a discriminative network that learns a deep speaker embedding capturing speaker idiosyncrasy. In this paper, we propose a deep speaker embedding learning approach based on a hybrid deep neural network (DNN) for TI-SV in FM broadcasting. Not only are acoustic features utilized, but phoneme features are also introduced as prior knowledge to collectively learn the deep speaker embedding. The hybrid DNN consists of a convolutional neural network architecture for generating acoustic features and a multilayer perceptron architecture for sequentially extracting phoneme features, which represent significant pronunciation attributes. The extracted acoustic and phoneme features are concatenated to form deep embedding descriptors of speaker identity. The hybrid DNN demonstrates not only the complementarity between acoustic and phoneme features but also the temporality of phoneme features in a sequence. Our experiments show that the hybrid DNN outperforms existing methods and delivers a remarkable performance in FM broadcasting TI-SV.
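As a hedged sketch of the described fusion, the snippet below uses a small CNN stand-in for the acoustic branch and an MLP stand-in for the phoneme branch, then concatenates their outputs into one embedding descriptor; all dimensions (including the 40-dimensional phoneme input) are assumptions, since the abstract specifies only the CNN/MLP split and the concatenation:

```python
import torch
import torch.nn as nn

class HybridEmbedding(nn.Module):
    """Concatenate a CNN acoustic embedding with an MLP phoneme embedding."""

    def __init__(self, acoustic_dim: int = 256, phoneme_dim: int = 64):
        super().__init__()
        self.acoustic_net = nn.Sequential(  # stand-in for the CNN branch
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, acoustic_dim),
        )
        self.phoneme_net = nn.Sequential(   # stand-in for the MLP branch
            nn.Linear(40, 128), nn.ReLU(),
            nn.Linear(128, phoneme_dim),
        )

    def forward(self, spectrogram: torch.Tensor,
                phoneme_feats: torch.Tensor) -> torch.Tensor:
        a = self.acoustic_net(spectrogram)   # (B, acoustic_dim)
        p = self.phoneme_net(phoneme_feats)  # (B, phoneme_dim)
        return torch.cat([a, p], dim=1)      # deep embedding descriptor

emb = HybridEmbedding()(torch.rand(4, 1, 64, 100), torch.rand(4, 40))
```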

{"title":"Learning Deep Embedding with Acoustic and Phoneme Features for Speaker Recognition in FM Broadcasting","authors":"Xiao Li,&nbsp;Xiao Chen,&nbsp;Rui Fu,&nbsp;Xiao Hu,&nbsp;Mintong Chen,&nbsp;Kun Niu","doi":"10.1049/2024/6694481","DOIUrl":"10.1049/2024/6694481","url":null,"abstract":"<p>Text-independent speaker verification (TI-SV) is a crucial task in speaker recognition, as it involves verifying an individual’s claimed identity from speech of arbitrary content without any human intervention. The target for TI-SV is to design a discriminative network to learn deep speaker embedding for speaker idiosyncrasy. In this paper, we propose a deep speaker embedding learning approach of a hybrid deep neural network (DNN) for TI-SV in FM broadcasting. Not only acoustic features are utilized, but also phoneme features are introduced as prior knowledge to collectively learn deep speaker embedding. The hybrid DNN consists of a convolutional neural network architecture for generating acoustic features and a multilayer perceptron architecture for extracting phoneme features sequentially, which represent significant pronunciation attributes. The extracted acoustic and phoneme features are concatenated to form deep embedding descriptors for speaker identity. The hybrid DNN demonstrates not only the complementarity between acoustic and phoneme features but also the temporality of phoneme features in a sequence. Our experiments show that the hybrid DNN outperforms existing methods and delivers a remarkable performance in FM broadcasting TI-SV.</p>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"2024 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2024-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/2024/6694481","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140220402","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
On the Potential of Algorithm Fusion for Demographic Bias Mitigation in Face Recognition
IF 1.8 | CAS Zone 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-02-23 | DOI: 10.1049/2024/1808587
Jascha Kolberg, Yannik Schäfer, Christian Rathgeb, Christoph Busch

With the rise of deep neural networks, the performance of biometric systems has increased tremendously. Biometric systems for face recognition are now used in everyday life, e.g., border control, crime prevention, or personal device access control. Although the accuracy of face recognition systems is generally high, they are not without flaws. Many biometric systems have been found to exhibit demographic bias, resulting in different demographic groups not being recognized with the same accuracy. This is especially true for facial recognition due to demographic factors, e.g., gender and skin color. While many previous works have reported demographic bias, this work aims to reduce demographic bias in biometric face recognition applications. To this end, 12 face recognition systems are benchmarked regarding biometric recognition performance as well as demographic differentials, i.e., fairness. Subsequently, multiple fusion techniques are applied with the goal of improving fairness in contrast to single systems. The experimental results show that it is possible to improve fairness regarding single demographics, e.g., skin color or gender, while improving fairness for demographic subgroups turns out to be more challenging.
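One common way to quantify such demographic differentials is the max-min gap in a per-group error rate at a fixed decision threshold. The sketch below computes that gap for the false non-match rate; this is an illustrative measure of our choosing, and the paper's exact fairness metric may differ:

```python
import numpy as np

def fairness_gap(scores: np.ndarray, labels: np.ndarray,
                 groups: np.ndarray, threshold: float) -> float:
    """Max-min gap in false non-match rate (FNMR) across demographic groups.

    scores: comparison scores, labels: 1 for mated pairs, groups: group ids.
    A gap of 0 means every group is recognized with the same FNMR.
    """
    fnmrs = []
    for g in np.unique(groups):
        mated = (groups == g) & (labels == 1)
        # Fraction of mated comparisons rejected at this threshold.
        fnmrs.append(np.mean(scores[mated] < threshold))
    return float(max(fnmrs) - min(fnmrs))
```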

{"title":"On the Potential of Algorithm Fusion for Demographic Bias Mitigation in Face Recognition","authors":"Jascha Kolberg,&nbsp;Yannik Schäfer,&nbsp;Christian Rathgeb,&nbsp;Christoph Busch","doi":"10.1049/2024/1808587","DOIUrl":"10.1049/2024/1808587","url":null,"abstract":"<p>With the rise of deep neural networks, the performance of biometric systems has increased tremendously. Biometric systems for face recognition are now used in everyday life, e.g., border control, crime prevention, or personal device access control. Although the accuracy of face recognition systems is generally high, they are not without flaws. Many biometric systems have been found to exhibit demographic bias, resulting in different demographic groups being not recognized with the same accuracy. This is especially true for facial recognition due to demographic factors, e.g., gender and skin color. While many previous works already reported demographic bias, this work aims to reduce demographic bias for biometric face recognition applications. In this regard, 12 face recognition systems are benchmarked regarding biometric recognition performance as well as demographic differentials, i.e., fairness. Subsequently, multiple fusion techniques are applied with the goal to improve the fairness in contrast to single systems. The experimental results show that it is possible to improve the fairness regarding single demographics, e.g., skin color or gender, while improving fairness for demographic subgroups turns out to be more challenging.</p>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"2024 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2024-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/2024/1808587","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140436576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Face Forgery Detection with Long-Range Noise Features and Multilevel Frequency-Aware Clues
IF 1.8 | CAS Zone 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-02-05 | DOI: 10.1049/2024/6523854
Yi Zhao, Xin Jin, Song Gao, Liwen Wu, Shaowen Yao, Qian Jiang

The widespread dissemination of high-fidelity fake faces created by face forgery techniques has caused serious trust concerns and ethical issues in modern society. Consequently, face forgery detection has emerged as a prominent research topic to prevent technology abuse. Although most existing face forgery detectors demonstrate success when evaluating high-quality faces under intra-dataset scenarios, they often overfit to manipulation-specific artifacts and lack robustness to postprocessing operations. In this work, we design an innovative dual-branch collaboration framework that leverages the strengths of the transformer and the CNN to thoroughly dig into multimodal forgery artifacts from both a global and a local perspective. Specifically, a novel adaptive noise trace enhancement module (ANTEM) is proposed to remove high-level face content while amplifying more generalized forgery artifacts in the noise domain. The transformer-based branch can then track long-range noise features. Meanwhile, considering that subtle forgery artifacts can be described in the frequency domain even under compression, a multilevel frequency-aware module (MFAM) is developed and applied to the CNN-based branch to extract complementary frequency-aware clues. Besides, we incorporate a collaboration strategy involving a cross-entropy loss and a single center loss to enhance the learning of more generalized representations by optimizing the fusion features of the dual branch. Extensive experiments on various benchmark datasets substantiate the superior generalization and robustness of our framework compared to competing approaches.
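ANTEM itself is learned and adaptive, which a few lines cannot reproduce; as rough intuition for "amplifying artifacts in the noise domain", the sketch below extracts a generic high-pass residual by subtracting a box-blurred copy of the image. This is a common stand-in for noise-trace extraction, not the paper's module:

```python
import torch
import torch.nn.functional as F

def noise_residual(img: torch.Tensor, k: int = 5) -> torch.Tensor:
    """High-pass residual: the image minus a box-blurred copy of itself.

    A generic stand-in for noise-trace extraction; the paper's ANTEM is
    adaptive and learned, which this fixed filter does not reproduce.
    img: (B, C, H, W) tensor with values in [0, 1].
    """
    c = img.shape[1]
    kernel = torch.full((c, 1, k, k), 1.0 / (k * k))  # depthwise box filter
    low = F.conv2d(img, kernel, padding=k // 2, groups=c)
    return img - low  # high-frequency content where forgery traces tend to live
```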

{"title":"Face Forgery Detection with Long-Range Noise Features and Multilevel Frequency-Aware Clues","authors":"Yi Zhao,&nbsp;Xin Jin,&nbsp;Song Gao,&nbsp;Liwen Wu,&nbsp;Shaowen Yao,&nbsp;Qian Jiang","doi":"10.1049/2024/6523854","DOIUrl":"10.1049/2024/6523854","url":null,"abstract":"<p>The widespread dissemination of high-fidelity fake faces created by face forgery techniques has caused serious trust concerns and ethical issues in modern society. Consequently, face forgery detection has emerged as a prominent topic of research to prevent technology abuse. Although, most existing face forgery detectors demonstrate success when evaluating high-quality faces under intra-dataset scenarios, they often overfit manipulation-specific artifacts and lack robustness to postprocessing operations. In this work, we design an innovative dual-branch collaboration framework that leverages the strengths of the transformer and CNN to thoroughly dig into the multimodal forgery artifacts from both a global and local perspective. Specifically, a novel adaptive noise trace enhancement module (ANTEM) is proposed to remove high-level face content while amplifying more generalized forgery artifacts in the noise domain. Then, the transformer-based branch can track long-range noise features. Meanwhile, considering that subtle forgery artifacts could be described in the frequency domain even in a compression scenario, a multilevel frequency-aware module (MFAM) is developed and further applied to the CNN-based branch to extract complementary frequency-aware clues. Besides, we incorporate a collaboration strategy involving cross-entropy loss and single center loss to enhance the learning of more generalized representations by optimizing the fusion features of the dual branch. Extensive experiments on various benchmark datasets substantiate the superior generalization and robustness of our framework when compared to the competing approaches.</p>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"2024 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2024-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/2024/6523854","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139862462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The Impact of Illumination on Finger Vascular Pattern Recognition
IF 1.8 | CAS Zone 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-02-03 | DOI: 10.1049/2024/4413655
Pesigrihastamadya Normakristagaluh, Geert J. Laanstra, Luuk J. Spreeuwers, Raymond N. J. Veldhuis

This paper studies the impact of illumination direction and bundle width on finger vascular pattern imaging and recognition performance. A qualitative theoretical model is presented to explain the projection of finger blood vessels on the skin. A series of experiments were conducted using a scanner of our design with illumination from the top, from a single side (left or right), and with narrow or wide beams. A new dataset was collected for the experiments, containing 4,428 NIR images of finger vein patterns captured under well-controlled conditions to minimize position and rotation angle differences between sessions. Top illumination performs well because it is more homogeneous, which makes a larger number of veins visible. Narrower bundles of light do not affect which veins are visible, but they reduce the overexposure at finger boundaries and increase the quality of vascular pattern images. The narrow beam achieves the best performance, with an FNMR of 0% at FMR = 0.01%, while the wide beam consistently results in a higher false nonmatch rate. The comparison of left- and right-side illumination has the highest error rates because only the veins in the middle of the finger are visible in both images. Illumination from different directions may be interoperable, since it produces the same vascular pattern, which is principally the projected shadow of the vessels on the finger surface. Score and image fusion for the right and left sides result in recognition performance similar to that obtained with top illumination, indicating that the vein patterns are independent of illumination direction. All results of these experiments support the proposed model.
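The operating point reported above is the FNMR at an FMR of 0.01%. A sketch of computing that metric from genuine and impostor comparison scores (the function and implementation are ours, not the paper's):

```python
import numpy as np

def fnmr_at_fmr(genuine: np.ndarray, impostor: np.ndarray,
                target_fmr: float = 1e-4) -> float:
    """FNMR at the threshold where the FMR equals target_fmr (0.01% here).

    genuine/impostor: similarity scores for mated/non-mated comparisons.
    """
    # Threshold set so that a target_fmr fraction of impostor scores pass.
    thr = np.quantile(impostor, 1.0 - target_fmr)
    return float(np.mean(genuine < thr))
```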

{"title":"The Impact of Illumination on Finger Vascular Pattern Recognition","authors":"Pesigrihastamadya Normakristagaluh,&nbsp;Geert J. Laanstra,&nbsp;Luuk J. Spreeuwers,&nbsp;Raymond N. J. Veldhuis","doi":"10.1049/2024/4413655","DOIUrl":"10.1049/2024/4413655","url":null,"abstract":"<p>This paper studies the impact of illumination direction and bundle width on finger vascular pattern imaging and recognition performance. A qualitative theoretical model is presented to explain the projection of finger blood vessels on the skin. A series of experiments were conducted using a scanner of our design with illumination from the top, a single-direction side (left or right), and narrow or wide beams. A new dataset was collected for the experiments, containing 4,428 NIR images of finger vein patterns captured under well-controlled conditions to minimize position and rotation angle differences between different sessions. Top illumination performs well because of more homogenous, which enhances a larger number of visible veins. Narrower bundles of light do not affect which veins are visible, but they reduce the overexposure at finger boundaries and increase the quality of vascular pattern images. The narrow beam achieves the best performance with 0% of [email protected]%, and the wide beam consistently results in a higher false nonmatch rate. The comparison of left- and right-side illumination has the highest error rates because only the veins in the middle of the finger are visible in both images. Different directional illumination may be interoperable since they produce the same vascular pattern and principally are the projected shadows on the finger surface. Score and image fusion for right- and left-side result in recognition performance similar to that obtained with top illumination, indicating the vein patterns are independent of illumination direction. All results of these experiments support the proposed model.</p>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"2024 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2024-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/2024/4413655","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139867791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Impact of Occlusion Masks on Gender Classification from Iris Texture
IF 1.8 | CAS Zone 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-01-27 | DOI: 10.1049/2024/8526857
Claudio Yáñez, Juan E. Tapia, Claudio A. Perez, Christoph Busch

Gender classification on normalized iris images has been previously attempted with varying degrees of success. These previous studies have shown that occlusion masks may introduce gender information; occlusion masks are used in iris recognition to remove non-iris elements. When the goal is to classify gender using exclusively the iris texture, the presence of gender information in the masks may result in apparently higher accuracy that does not reflect the actual gender information present in the iris. However, no measures have been taken to eliminate this information while preserving as much iris information as possible. We propose a novel method to assess the gender information present in the iris more accurately by eliminating gender information in the masks. This consists of pairing irises that have similar masks but different genders, generating a paired mask using the OR operator, and applying this mask to the iris. Additionally, we manually fix iris segmentation errors to study their impact on gender classification. Our results show that occlusion masks can account for 6.92% of the gender classification accuracy on average. Therefore, works aiming to perform gender classification using the iris texture from normalized iris images should eliminate this correlation.
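The pairing step is concrete enough to sketch: take two normalized irises whose masks are similar but whose genders differ, OR the two masks, and apply the union so that the mask shape can no longer separate the genders. The conventions below (mask value True marks occluded pixels, occluded pixels set to zero, the union applied to both images) are our assumptions:

```python
import numpy as np

def pair_masks(iris_a: np.ndarray, mask_a: np.ndarray,
               iris_b: np.ndarray, mask_b: np.ndarray):
    """Apply the OR of two occlusion masks to a pair of normalized irises.

    Assumed convention: masks are boolean arrays where True = occluded
    (non-iris) pixel; occluded pixels are zeroed in both outputs.
    """
    paired = np.logical_or(mask_a, mask_b)  # union of the two masks
    out_a = np.where(paired, 0, iris_a)     # same occlusion pattern
    out_b = np.where(paired, 0, iris_b)     # applied to both irises
    return out_a, out_b, paired
```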

{"title":"Impact of Occlusion Masks on Gender Classification from Iris Texture","authors":"Claudio Yáñez,&nbsp;Juan E. Tapia,&nbsp;Claudio A. Perez,&nbsp;Christoph Busch","doi":"10.1049/2024/8526857","DOIUrl":"10.1049/2024/8526857","url":null,"abstract":"<p>Gender classification on normalized iris images has been previously attempted with varying degrees of success. In these previous studies, it has been shown that occlusion masks may introduce gender information; occlusion masks are used in iris recognition to remove non-iris elements. When, the goal is to classify the gender using exclusively the iris texture, the presence of gender information in the masks may result in apparently higher accuracy, thereby not reflecting the actual gender information present in the iris. However, no measures have been taken to eliminate this information while preserving as much iris information as possible. We propose a novel method to assess the gender information present in the iris more accurately by eliminating gender information in the masks. This consists of pairing iris with similar masks and different gender, generating a paired mask using the OR operator, and applying this mask to the iris. Additionally, we manually fix iris segmentation errors to study their impact on the gender classification. Our results show that occlusion masks can account for 6.92% of the gender classification accuracy on average. Therefore, works aiming to perform gender classification using the iris texture from normalized iris images should eliminate this correlation.</p>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"2024 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2024-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/2024/8526857","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140492836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Noncontact Palm Vein ROI Extraction Based on Improved Lightweight HRnet in Complex Backgrounds
IF 1.8 | CAS Zone 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-01-17 | DOI: 10.1049/2024/4924184
Fen Dai, Ziyang Wang, Xiangqun Zou, Rongwen Zhang, Xiaoling Deng

The extraction of the ROI (region of interest) is a key step in noncontact palm vein recognition and is crucial for the subsequent feature extraction and matching. A noncontact palm vein ROI extraction algorithm based on an improved HRnet for keypoint localization is proposed to deal with hand gesture irregularities, translation, scaling, and rotation in complex backgrounds. To reduce computation time and model size for eventual deployment in low-cost embedded systems, the improved HRnet is made lightweight by reconstructing the residual block structure and adopting depth-separable convolution, which greatly reduces the model size and improves the inference speed of forward propagation. Palm vein ROI localization and palm vein recognition are then evaluated on a self-built dataset and two public datasets (CASIA and TJU-PV). The improved HRnet achieves 97.36% keypoint detection accuracy on the self-built palm vein dataset, and 98.23% and 98.74% on the two public palm vein datasets (CASIA and TJU-PV), respectively. The model size is only 0.45 M, and on a CPU with a clock speed of 3 GHz the average ROI extraction time for one image is 0.029 s. Based on the keypoints and the corresponding ROI extraction, the equal error rate (EER) of palm vein recognition is 0.000362%, 0.014541%, and 0.005951%, and the false nonmatch rate is 0.000001%, 11.034725%, and 4.613714% (at a false match rate of 0.01%) on the self-built dataset, TJU-PV, and CASIA, respectively. The experimental results show that the proposed algorithm is feasible and effective and provides a reliable experimental basis for research on palm vein recognition technology.
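The headline recognition metric here is the equal error rate. A standard sketch of estimating the EER from genuine and impostor comparison scores (only the metric comes from the paper; the implementation is ours):

```python
import numpy as np

def equal_error_rate(genuine: np.ndarray, impostor: np.ndarray) -> float:
    """EER: the error rate at the threshold where FMR and FNMR cross."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    fmr = np.array([np.mean(impostor >= t) for t in thresholds])  # impostors accepted
    fnmr = np.array([np.mean(genuine < t) for t in thresholds])   # genuines rejected
    i = int(np.argmin(np.abs(fmr - fnmr)))
    return float((fmr[i] + fnmr[i]) / 2.0)
```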

{"title":"Noncontact Palm Vein ROI Extraction Based on Improved Lightweight HRnet in Complex Backgrounds","authors":"Fen Dai,&nbsp;Ziyang Wang,&nbsp;Xiangqun Zou,&nbsp;Rongwen Zhang,&nbsp;Xiaoling Deng","doi":"10.1049/2024/4924184","DOIUrl":"10.1049/2024/4924184","url":null,"abstract":"<p>The extraction of ROI (region of interest) was a key step in noncontact palm vein recognition, which was crucial for the subsequent feature extraction and feature matching. A noncontact palm vein ROI extraction algorithm based on the improved HRnet for keypoints localization was proposed for dealing with hand gesture irregularities, translation, scaling, and rotation in complex backgrounds. To reduce the computation time and model size for ultimate deploying in low-cost embedded systems, this improved HRnet was designed to be lightweight by reconstructing the residual block structure and adopting depth-separable convolution, which greatly reduced the model size and improved the inference speed of network forward propagation. Next, the palm vein ROI localization and palm vein recognition are processed in self-built dataset and two public datasets (CASIA and TJU-PV). The proposed improved HRnet algorithm achieved 97.36% accuracy for keypoints detection on self-built palm vein dataset and 98.23% and 98.74% accuracy for keypoints detection on two public palm vein datasets (CASIA and TJU-PV), respectively. The model size was only 0.45 M, and on a CPU with a clock speed of 3 GHz, the average running time of ROI extraction for one image was 0.029 s. Based on the keypoints and corresponding ROI extraction, the equal error rate (EER) of palm vein recognition was 0.000362%, 0.014541%, and 0.005951% and the false nonmatch rate was 0.000001%, 11.034725%, and 4.613714% (false match rate: 0.01%) in the self-built dataset, TJU-PV, and CASIA, respectively. The experimental result showed that the proposed algorithm was feasible and effective and provided a reliable experimental basis for the research of palm vein recognition technology.</p>","PeriodicalId":48821,"journal":{"name":"IET Biometrics","volume":"2024 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2024-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/2024/4924184","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139526814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0