
Latest Publications in IET Biometrics

A DeepConvLSTM Approach for Continuous Authentication Using Operational System Performance Counters
IF 1.8 CAS Zone 4 Computer Science Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-08-26 DOI: 10.1049/bme2/8262252
César H. G. Andrade, Hendrio L. S. Bragança, Horácio Fernandes, Eduardo Feitosa, Eduardo Souto

Authentication in personal and corporate computer systems predominantly relies on login and password credentials, which are vulnerable to unauthorized access, especially when genuine users leave their devices unlocked. To address this issue, continuous authentication (CA) systems based on behavioral biometrics have gained attention. Traditional CA models leverage user–device interactions, such as mouse movements, typing dynamics, and speech recognition. This paper introduces a novel approach that utilizes system performance counters—attributes such as memory usage, CPU load, and network activity—collected passively by operating systems (OSs), to develop a robust and low-intrusive authentication mechanism. Our method employs a deep network architecture combining convolutional neural networks (CNNs) with long short-term memory (LSTM) layers to analyze temporal patterns and identify unique user behaviors. Unlike traditional methods, performance counters capture subtle system-level usage patterns that are harder to mimic, enhancing security and resilience to attacks. We integrate a trust model into the CA framework to balance security and usability by avoiding interruptions for genuine users while blocking impostors in real time. We evaluate our approach using two new datasets, COUNT-SO-I (26 users) and COUNT-SO-II (37 users), collected in real-world scenarios without specific task constraints. Our results demonstrate the feasibility and effectiveness of the proposed method, achieving 99% detection accuracy (ACC) for impostor users within an average of 17.2 s, while maintaining seamless user experiences. These findings highlight the potential of performance counter–based CA systems for practical applications, such as safeguarding sensitive systems in corporate, governmental, and personal environments.
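
To make the architecture concrete, here is a minimal sketch of a DeepConvLSTM-style classifier over windows of performance-counter time series, assuming PyTorch; the counter count, window length, and layer sizes are illustrative stand-ins, not the paper's configuration.

```python
# Minimal DeepConvLSTM sketch: 1D convolutions over counter time series,
# then an LSTM for longer-range temporal structure. Assumes PyTorch.
import torch
import torch.nn as nn

class DeepConvLSTM(nn.Module):
    def __init__(self, n_counters=32, n_filters=64, hidden=128):
        super().__init__()
        # Convolutions extract local patterns along the time axis.
        self.conv = nn.Sequential(
            nn.Conv1d(n_counters, n_filters, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(n_filters, n_filters, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(n_filters, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # genuine-vs-impostor logit

    def forward(self, x):                  # x: (batch, n_counters, time)
        h = self.conv(x)                   # (batch, n_filters, time)
        h = h.permute(0, 2, 1)             # (batch, time, n_filters)
        _, (h_n, _) = self.lstm(h)         # h_n: (1, batch, hidden)
        return self.head(h_n[-1])          # (batch, 1)

model = DeepConvLSTM()
window = torch.randn(8, 32, 120)           # 8 windows, 32 counters, 120 steps
print(model(window).shape)                 # torch.Size([8, 1])
```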

Citations: 0
A Dynamic Interactive Fusion Model for Extracting Fatigue Features Based on the Audiovisual Data Flow of Air Traffic Controllers
IF 1.8 CAS Zone 4 Computer Science Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-08-22 DOI: 10.1049/bme2/7626919
Zhiyuan Shen, Xueyan Li, Junqi Bai, Kai Wang, Yifan Xu

Fatigue among air traffic controllers is a factor contributing to civil aviation crashes. Existing methods for extracting and fusing fatigue features encounter two main challenges: (1) the low accuracy of traditional single-mode fatigue recognition methods, and (2) the disregard of multimodal data correlations in traditional multimodal methods for feature concatenation and fusion. This paper proposes an interactive algorithm for the fusion and recognition of multimode fatigue features that combines multihead attention (MHA) and cross-attention (XATTN), built on improved speech and facial fatigue recognition models. First, an improved Conformer model, which combines a convolutional module with a transformer encoder, is proposed using the radiotelephony communication data of controllers, with the filter-bank method employed to extract deep speech features. Second, facial data of controllers are processed via pointwise convolutions employing a stack of inverted residual layers, which facilitates the extraction of facial features. Third, speech and facial features are fused interactively by combining MHA and XATTN, which achieves high accuracy in recognizing the fatigue state of controllers working in complex operational environments. A series of experiments were conducted with audiovisual data sets collected from actual air traffic control (ATC) missions. Compared with four competing methods for fusing multimodal features, the proposed method achieved a recognition accuracy of 99.2%, which was 8.9% higher than that of a speech single-mode model and 0.4% higher than that of a facial single-mode model.
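
A rough sketch of the MHA-plus-cross-attention fusion idea, assuming PyTorch; the embedding dimension, head count, and the pooling-plus-linear head are illustrative choices rather than the paper's exact design.

```python
# Cross-modal fusion sketch: self-attention within speech, then each
# modality queries the other via cross-attention. Assumes PyTorch.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim=256, heads=4, n_classes=2):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.x_attn_sf = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.x_attn_fs = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cls = nn.Linear(2 * dim, n_classes)

    def forward(self, speech, face):   # (batch, T_s, dim), (batch, T_f, dim)
        s, _ = self.self_attn(speech, speech, speech)  # MHA within speech
        s2f, _ = self.x_attn_sf(s, face, face)         # speech queries face
        f2s, _ = self.x_attn_fs(face, s, s)            # face queries speech
        fused = torch.cat([s2f.mean(dim=1), f2s.mean(dim=1)], dim=-1)
        return self.cls(fused)          # fatigue / non-fatigue logits

fusion = CrossModalFusion()
out = fusion(torch.randn(4, 50, 256), torch.randn(4, 30, 256))
print(out.shape)                        # torch.Size([4, 2])
```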

Citations: 0
FingerUNeSt++: Improving Fingertip Segmentation in Contactless Fingerprint Imaging Using Deep Learning
IF 1.8 CAS Zone 4 Computer Science Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-07-19 DOI: 10.1049/bme2/9982355
Laurenz Ruzicka, Bernhard Kohn, Clemens Heitzinger

Biometric identification systems, particularly those utilizing fingerprints, have become essential as a means of authenticating users due to their reliability and uniqueness. The recent shift towards contactless fingerprint sensors requires precise fingertip segmentation against changing backgrounds to maintain high accuracy. This study introduces a novel deep learning model, called FingerUNeSt++, that combines the ResNeSt and UNet++ architectures, aimed at improving segmentation accuracy and inference speed for contactless fingerprint images. Our model significantly outperforms traditional and state-of-the-art methods, achieving superior performance metrics. Extensive data augmentation and an optimized model architecture contribute to its robustness and efficiency. This advancement holds promise for enhancing the effectiveness of contactless biometric systems in diverse real-world applications.
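
The nested dense skip connections are the distinguishing feature of UNet++. The sketch below shows the idea at two encoder levels, assuming PyTorch; a plain convolutional block stands in for the ResNeSt encoder used by FingerUNeSt++, and all channel widths are illustrative.

```python
# Tiny UNet++-style decoder sketch with nested skip nodes X01, X11, X02.
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU())

class TinyUNetPP(nn.Module):
    def __init__(self, c=16):
        super().__init__()
        self.x00, self.x10, self.x20 = block(3, c), block(c, 2*c), block(2*c, 4*c)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.x01 = block(c + 2*c, c)          # X00 + up(X10)
        self.x11 = block(2*c + 4*c, 2*c)      # X10 + up(X20)
        self.x02 = block(c + c + 2*c, c)      # X00 + X01 + up(X11)
        self.head = nn.Conv2d(c, 1, 1)        # fingertip mask logits

    def forward(self, x):
        x00 = self.x00(x)
        x10 = self.x10(self.pool(x00))
        x20 = self.x20(self.pool(x10))
        x01 = self.x01(torch.cat([x00, self.up(x10)], 1))
        x11 = self.x11(torch.cat([x10, self.up(x20)], 1))
        x02 = self.x02(torch.cat([x00, x01, self.up(x11)], 1))
        return self.head(x02)

net = TinyUNetPP()
print(net(torch.randn(1, 3, 64, 64)).shape)   # torch.Size([1, 1, 64, 64])
```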

Citations: 0
Deepfake Video Traceability and Authentication via Source Attribution
IF 1.8 CAS Zone 4 Computer Science Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-07-13 DOI: 10.1049/bme2/5687970
Canghai Shi, Minglei Qiao, Zhuang Li, Zahid Akhtar, Bin Wang, Meng Han, Tong Qiao

In recent years, deepfake videos have emerged as a significant threat to societal and cybersecurity landscapes. Artificial intelligence (AI) techniques are used to create convincing deepfakes. The main counter method is deepfake detection. Currently, most of the mainstream detectors are based on deep neural networks. Such deep learning detection frameworks often face several problems that need to be addressed, for example, dependence on large annotated datasets, lack of interpretability, and limited attention to source traceability. Towards overcoming these limitations, in this paper, we propose a novel training-free deepfake detection framework based on interpretable inherent source attribution. The proposed framework not only distinguishes between real and fake videos but also traces their origins using camera fingerprints. Moreover, we have also constructed a new deepfake video dataset from 10 distinct camera devices. Experimental evaluations on multiple datasets show that the proposed method can attain high detection accuracies (ACCs) comparable to state-of-the-art (SOTA) deep learning techniques and also has superior traceability capabilities. This framework provides a robust and efficient solution for deepfake video authentication and source attribution, thus, making it highly adaptable to real-world scenarios.
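
One common way to realize camera-fingerprint attribution is via sensor-noise (PRNU-style) residuals; whether the paper uses exactly this pipeline is not stated here, so the following NumPy/SciPy sketch illustrates the general technique, with a Gaussian filter as a stand-in denoiser.

```python
# PRNU-style source attribution sketch: average denoising residuals into a
# camera fingerprint, then score probes by normalized cross-correlation.
import numpy as np
from scipy.ndimage import gaussian_filter

def residual(img):
    """Noise residual: image minus a smoothed (denoised) version of itself."""
    return img - gaussian_filter(img, sigma=1.5)

def fingerprint(images):
    """Average residual over many frames from the same camera."""
    return np.mean([residual(im) for im in images], axis=0)

def ncc(a, b):
    """Normalized cross-correlation between a residual and a fingerprint."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(0)
cam_frames = [rng.normal(size=(64, 64)) for _ in range(20)]  # stand-in frames
fp = fingerprint(cam_frames)
probe = residual(cam_frames[0])
print(f"similarity to claimed camera: {ncc(probe, fp):.3f}")
# A probe whose correlation falls below a tuned threshold would be flagged
# as not originating from the claimed camera (or as manipulated).
```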

Citations: 0
A Dermatoglyphic Study of Primary Fingerprints Pattern in Relation to Gender and Blood Group Among Residents of Kathmandu Valley, Nepal
IF 1.8 CAS Zone 4 Computer Science Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-06-20 DOI: 10.1049/bme2/9993120
Sushma Paudel, Sushmita Paudel, Samikshya Kafle

Fingerprints are unique biometric identifiers that reflect intricate genetic and environmental/physiological influences. Beyond their forensic significance, they can offer insights into physiological traits like blood groups and gender, which can help forensic analysis narrow down a search. This exploratory study aims to identify potential associations between fingerprint patterns, gender, and blood groups within a defined regional cohort in Kathmandu, Nepal. This preliminary study included 290 students (144 males and 146 females) from Himalayan Whitehouse International College (HWIC). Fingerprint patterns (loops, whorls, and arches) were analyzed and compared with participants’ ABO-Rh blood groups. Statistical analyses, including chi-square tests, were used to determine associations and trends. Loops emerged as the most common fingerprint pattern (57.14%), followed by whorls (35%) and arches (7.86%). Blood group B+ve was the most prevalent (33.1%) among the study population in Kathmandu. A significant association between gender and fingerprint pattern was observed: the gender analysis revealed that loops were predominant in females, while males showed a higher frequency of whorls. While no significant relationship was observed between ABO blood groups and fingerprint patterns, a strong association was found between fingerprint patterns and the Rh factor (p = 0.0496). Loops were more prevalent among Rh-positive (Rh+ve) individuals, while whorls were more common among Rh-negative (Rh−ve) individuals. Additionally, specific fingers were observed to exhibit particular fingerprint patterns more frequently. Arches were most prevalent on the index fingers of both hands; loops were most abundant on both pinky fingers and the left middle finger; whorls were most frequently observed on the ring fingers of both hands and the right thumb. The findings reinforce global patterns of blood group and fingerprint distribution, in which Rh+ve individuals represent the majority and loops are the most dominant fingerprint pattern. The gender-specific trends suggest a nuanced interplay of genetics, with females displaying a higher frequency of loops and males showing more whorls. Similarly, some blood groups are more likely to exhibit a specific set of fingerprint patterns. This research clearly shows gender-based differences and the influence of genetic factors, particularly the Rh factor, on fingerprint patterns. These findings contribute to the growing field of dermatoglyphics, with implications for forensic science and population genetics.
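
The chi-square test of association used in the study can be reproduced with SciPy; in the sketch below the contingency counts are hypothetical placeholders that merely illustrate the pattern-by-Rh layout, not the paper's data.

```python
# Chi-square test of independence on a fingerprint-pattern x Rh-factor table.
from scipy.stats import chi2_contingency

#                 loops  whorls  arches
contingency = [[160,     95,     20],   # Rh-positive (hypothetical counts)
               [  6,      8,      1]]   # Rh-negative (hypothetical counts)

chi2, p, dof, expected = chi2_contingency(contingency)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.4f}")
# p < 0.05 would indicate a significant pattern-Rh association, as the
# study reports (p = 0.0496) for its own counts.
```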

Citations: 0
Advanced Image Quality Assessment for Hand- and Finger-Vein Biometrics
IF 1.8 CAS Zone 4 Computer Science Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-05-05 DOI: 10.1049/bme2/8869140
Simon Kirchgasser, Christof Kauba, Georg Wimmer, Andreas Uhl

Natural scene statistics, commonly used in no-reference image quality measures, and a proposed deep-learning (DL)–based quality assessment approach are suggested as biometric quality indicators for vasculature images. While NIQE (natural image quality evaluator) and BRISQUE (blind/referenceless image spatial quality evaluator), when trained on common images with typical distortions, do not work well for assessing the quality of vasculature pattern samples, their variants trained on high- and low-quality vasculature sample data behave as expected of a biometric quality estimator in most cases (deviations from the overall trend occur for certain datasets or feature extraction methods). A DL-based quality metric is proposed in this work, designed to assign the correct quality class to vasculature pattern samples in most cases, independent of whether finger or hand vein patterns are being assessed. The experiments, evaluating NIQE, BRISQUE, and the newly proposed DL quality metrics, were conducted on a total of 13 publicly available finger and hand vein datasets and involve three distinct template representations (two of them designed especially for vascular biometrics). The proposed (trained) quality measures are compared to several classical quality metrics, with the achieved results underlining their promising behavior.
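
BRISQUE-style natural scene statistics are built on mean-subtracted contrast-normalized (MSCN) coefficients; a retrained variant would fit its regressor on statistics of these coefficients computed from vein samples. Below is a minimal NumPy/SciPy sketch of the MSCN step, with an illustrative normalization window.

```python
# MSCN coefficients: locally normalize luminance by its mean and contrast.
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(img, sigma=7/6, eps=1e-8):
    mu = gaussian_filter(img, sigma)
    var = np.maximum(gaussian_filter(img * img, sigma) - mu * mu, 0.0)
    return (img - mu) / (np.sqrt(var) + eps)

rng = np.random.default_rng(1)
vein_img = rng.random((128, 128))       # stand-in for a vein sample
coeffs = mscn(vein_img)
# Simple NSS-style features: distorted images deviate from the near-Gaussian
# statistics that natural, good-quality images exhibit.
print(f"variance = {coeffs.var():.3f}, fourth moment = {(coeffs**4).mean():.3f}")
```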

Citations: 0
Deep Distillation Hashing for Palmprint and Finger Vein Retrieval
IF 1.8 CAS Zone 4 Computer Science Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-04-25 DOI: 10.1049/bme2/9017371
Chenlong Liu, Lu Yang, Wen Zhou, Yuan Li, Fanchang Hao

With the increasing application of biometric recognition technology in daily life, the number of registered users is growing rapidly, making fast retrieval techniques increasingly important for biometric recognition. However, existing biometric recognition models are often overly complex, making them difficult to deploy on resource-constrained terminal devices. Inspired by knowledge distillation (KD) for model simplification and deep hashing for fast image retrieval, we propose a new model that achieves lightweight palmprint and finger vein retrieval. This model integrates hash distillation loss, classification distillation loss, and supervised loss from labels within a KD framework, and it further improves the retrieval and recognition performance of the lightweight model through its network design. Experimental results demonstrate that this method improves the performance of the student model on multiple palmprint and finger vein datasets, with retrieval precision and recognition accuracy surpassing several existing advanced hashing methods.
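
A plausible reading of the three-part objective is sketched below in PyTorch: hash distillation, temperature-softened classification distillation, and a supervised label loss. The loss weights, temperature, code length, and tanh relaxation are assumptions for illustration, not the paper's reported settings.

```python
# Combined distillation-hashing objective sketch. Assumes PyTorch.
import torch
import torch.nn.functional as F

def distillation_hashing_loss(s_hash, t_hash, s_logits, t_logits, labels,
                              T=4.0, a=1.0, b=0.5, c=1.0):
    # Hash distillation: student codes should match teacher codes.
    l_hash = F.mse_loss(torch.tanh(s_hash), torch.tanh(t_hash))
    # Classification distillation: KL between temperature-softened outputs.
    l_kd = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                    F.softmax(t_logits / T, dim=1),
                    reduction="batchmean") * T * T
    # Supervised loss from ground-truth identity labels.
    l_sup = F.cross_entropy(s_logits, labels)
    return a * l_hash + b * l_kd + c * l_sup

s_hash, t_hash = torch.randn(16, 48), torch.randn(16, 48)   # 48-bit codes
s_log, t_log = torch.randn(16, 100), torch.randn(16, 100)   # 100 identities
labels = torch.randint(0, 100, (16,))
print(distillation_hashing_loss(s_hash, t_hash, s_log, t_log, labels))
```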

Citations: 0
Wavelet-Based Texture Mining and Enhancement for Face Forgery Detection
IF 1.8 CAS Zone 4 Computer Science Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-02-13 DOI: 10.1049/bme2/2217175
Xin Li, Hui Zhao, Bingxin Xu, Hongzhe Liu

Due to the abuse of deep forgery technology, research on forgery detection methods has become increasingly urgent. The correspondence between frequency spectrum information and spatial clues, which is often neglected by current methods, can be conducive to more accurate and generalizable forgery detection. Motivated by this, we propose a wavelet-based texture mining and enhancement framework for face forgery detection. First, we introduce a frequency-guided texture enhancement (FGTE) module that mines high-frequency information to improve the network’s extraction of effective texture features. Next, we propose a global–local feature refinement (GLFR) module to enhance the model’s leverage of both global semantic features and local texture features. Moreover, an interactive fusion module (IFM) is designed to fully incorporate the enhanced texture clues into the spatial features. The proposed method has been extensively evaluated for face forgery detection on five public datasets: FaceForensics++ (FF++), the deepfake detection challenge (DFDC), Celeb-DFv2, DFDC preview (DFDC-P), and deepfake detection (DFD), yielding promising performance in both within-dataset and cross-dataset experiments.
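
The high-frequency guidance can be illustrated with a single-level 2D discrete wavelet transform, assuming the PyWavelets package: the three detail subbands carry the kind of high-frequency texture an FGTE-style module would mine.

```python
# Single-level 2D DWT: one low-frequency approximation plus three
# high-frequency detail subbands. Assumes the PyWavelets package (pywt).
import numpy as np
import pywt

rng = np.random.default_rng(2)
face = rng.random((128, 128))                     # stand-in for a face crop

cA, (cH, cV, cD) = pywt.dwt2(face, "haar")        # approximation + details
high_freq = np.abs(cH) + np.abs(cV) + np.abs(cD)  # aggregate detail energy

print(cA.shape, high_freq.shape)                  # (64, 64) (64, 64)
# Forgery artifacts often concentrate in these detail subbands, which is
# why high-frequency guidance helps the detector.
```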

Citations: 0
Product Color Design Concept that Considers Human Emotion Perception: Based on Deep Learning and Cluster Analysis
IF 1.8 CAS Zone 4 Computer Science Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-12-25 DOI: 10.1049/bme2/5576927
Anqi Gao, Yantao Zhong

Emotions play a significant role in how we perceive and interact with products. Thoughtfully designed, emotionally appealing products can evoke strong user responses, making them more attractive. Color, as a crucial attribute of products, is a significant aspect to consider in the process of emotional product design. However, users’ emotional perception of product colors is highly intricate and challenging to define. To address this, this research proposes a product color design concept that considers human emotion perception, based on deep learning and cluster analysis. First, for a given product, a color style, represented as an emotional color image, is chosen for rerendering; different emotional color images have distinct RGB color representations. Second, clustering methods are employed to establish relationships between various emotional color images and different colors, selecting emotionally close style images. Subsequently, through transfer learning techniques, specific grid structures are used to retrain network weights, allowing for the fusion design of style and content images. This process ultimately achieves emotional color rendering design based on emotional color clustering and transfer learning. Multiple sets of emotional color design examples demonstrate that the proposed method can accurately fulfill the emotional color design requirements of products, thereby offering practical applicability. A satisfaction survey shows that the proposed method has certain guiding significance for emotional color design in clothing.
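
The clustering step can be illustrated with k-means over the RGB pixels of a style image, assuming scikit-learn; the cluster count and the toy image are arbitrary choices for this sketch.

```python
# Cluster RGB pixels of an emotional color image into a dominant palette.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
style_image = rng.integers(0, 256, size=(100, 100, 3))      # stand-in image

pixels = style_image.reshape(-1, 3).astype(float)
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(pixels)

palette = km.cluster_centers_.round().astype(int)           # 5 dominant RGBs
weights = np.bincount(km.labels_) / len(km.labels_)         # cluster shares
for rgb, w in zip(palette, weights):
    print(f"RGB {tuple(rgb)}: {w:.1%}")
# Styles whose palettes lie close in RGB space would be treated as
# "emotionally close" candidates for the transfer-learning rendering step.
```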

Citations: 0
Facial and Neck Region Analysis for Deepfake Detection Using Remote Photoplethysmography Signal Similarity
IF 1.8 CAS Zone 4 Computer Science Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-11-21 DOI: 10.1049/bme2/7095412
Byeong Seon An, Hyeji Lim, Hyeon Ah Seong, Eui Chul Lee

Deepfake (DF) involves utilizing artificial intelligence (AI) technology to synthesize or manipulate images, voices, and other human or object data. However, recent times have seen a surge in instances of DF technology misuse, raising concerns about cybercrime and the credibility of manipulated information. The objective of this study is to devise a method that employs remote photoplethysmography (rPPG) biosignals for DF detection. The face was divided into five regions based on landmarks, with automatic extraction also performed on the neck region. We extracted an rPPG signal from each facial region, and the neck signal was defined as the ground truth. The Euclidean distance between each of the five facial signals and the neck signal was calculated, yielding five rPPG similarity features that were used as inputs to a support vector machine (SVM) model. Our approach demonstrated robust performance, with an area under the curve (AUC) score of 91.2% on the audio-driven dataset and 99.7% on the face swapping generative adversarial network (FSGAN) dataset, even though we only used subsets of the Korean DF Detection Dataset (KoDF) that exclude DF techniques identifiable by visual inspection. Therefore, our research findings demonstrate that similarity features of rPPG signals can be utilized as key features for detecting DFs.
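
A minimal end-to-end sketch of the five-feature pipeline, assuming NumPy and scikit-learn; the signals below are synthetic stand-ins (fakes are simulated as drifting further from the neck signal), so the printed accuracy is only a sanity check of the plumbing, not a result.

```python
# Five Euclidean-distance features (facial regions vs. neck signal) -> SVM.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def distance_features(region_signals, neck_signal):
    """One Euclidean distance per facial region -> 5-dim feature vector."""
    return np.array([np.linalg.norm(r - neck_signal) for r in region_signals])

rng = np.random.default_rng(4)
X, y = [], []
for label in (0, 1):                        # 0 = real, 1 = deepfake
    for _ in range(50):
        neck = np.sin(np.linspace(0, 20, 256)) + 0.1 * rng.normal(size=256)
        noise = 0.1 if label == 0 else 0.8  # fakes drift from the neck signal
        regions = [neck + noise * rng.normal(size=256) for _ in range(5)]
        X.append(distance_features(regions, neck))
        y.append(label)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```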

Citations: 0