
Latest Publications from IET Biometrics

Face Forgery Detection with Long-Range Noise Features and Multilevel Frequency-Aware Clues
IF 2 | CAS Tier 4, Computer Science | Q2 Computer Science | Pub Date: 2024-02-05 | DOI: 10.1049/2024/6523854
Yi Zhao, Xin Jin, Song Gao, Liwen Wu, Shaowen Yao, Qian Jiang
The widespread dissemination of high-fidelity fake faces created by face forgery techniques has caused serious trust concerns and ethical issues in modern society. Consequently, face forgery detection has emerged as a prominent research topic for preventing technology abuse. Although most existing face forgery detectors succeed when evaluating high-quality faces under intra-dataset scenarios, they often overfit manipulation-specific artifacts and lack robustness to postprocessing operations. In this work, we design an innovative dual-branch collaboration framework that leverages the strengths of the transformer and the CNN to thoroughly mine multimodal forgery artifacts from both a global and a local perspective. Specifically, a novel adaptive noise trace enhancement module (ANTEM) is proposed to remove high-level face content while amplifying more generalized forgery artifacts in the noise domain. The transformer-based branch can then track long-range noise features. Meanwhile, considering that subtle forgery artifacts can be described in the frequency domain even under compression, a multilevel frequency-aware module (MFAM) is developed and applied to the CNN-based branch to extract complementary frequency-aware clues. In addition, we incorporate a collaboration strategy involving cross-entropy loss and single center loss to enhance the learning of more generalized representations by optimizing the fusion features of the dual branch. Extensive experiments on various benchmark datasets substantiate the superior generalization and robustness of our framework compared to competing approaches.
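The combination of cross-entropy loss with a single center loss on the fused dual-branch features can be illustrated with a minimal numpy sketch. The margin value, feature dimensions, and toy data below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def single_center_loss(feats, labels, center, margin=0.3):
    """Sketch of a single center loss on fused dual-branch features.

    feats:  (N, D) fusion features; labels: (N,) 1 = real, 0 = fake.
    Real features are pulled toward `center`; fake features are pushed
    until their mean distance exceeds the real mean by a margin scaled
    by sqrt(D), as is common for this loss family.
    """
    d = np.linalg.norm(feats - center, axis=1)   # distance of each sample to center
    m_real = d[labels == 1].mean()               # mean distance of real faces
    m_fake = d[labels == 0].mean()               # mean distance of fake faces
    return m_real + max(m_real - m_fake + margin * np.sqrt(feats.shape[1]), 0.0)

def cross_entropy(probs, labels, eps=1e-12):
    """Binary cross-entropy on predicted real-face probabilities."""
    p = np.clip(probs, eps, 1 - eps)
    return -(labels * np.log(p) + (1 - labels) * np.log(1 - p)).mean()

# Toy batch: real features near the center, fake features far away.
rng = np.random.default_rng(0)
center = np.zeros(8)
feats = np.vstack([rng.normal(0.0, 0.1, (4, 8)),   # 4 real samples
                   rng.normal(3.0, 0.1, (4, 8))])  # 4 fake samples
labels = np.array([1, 1, 1, 1, 0, 0, 0, 0])
probs = np.array([0.9, 0.8, 0.95, 0.85, 0.1, 0.2, 0.05, 0.15])

# Hypothetical weighting of the two loss terms.
total = cross_entropy(probs, labels) + 0.5 * single_center_loss(feats, labels, center)
assert total > 0.0
```

With well-separated real and fake clusters, the push-away term of the center loss vanishes and only the compactness of the real cluster contributes, which is the intended behavior of this loss family.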
Citations: 0
The Impact of Illumination on Finger Vascular Pattern Recognition
IF 2 | CAS Tier 4, Computer Science | Q2 Computer Science | Pub Date: 2024-02-03 | DOI: 10.1049/2024/4413655
Pesigrihastamadya Normakristagaluh, Geert J. Laanstra, Luuk J. Spreeuwers, Raymond N. J. Veldhuis
This paper studies the impact of illumination direction and bundle width on finger vascular pattern imaging and recognition performance. A qualitative theoretical model is presented to explain the projection of finger blood vessels onto the skin. A series of experiments was conducted using a scanner of our own design with illumination from the top, from a single side (left or right), and with narrow or wide beams. A new dataset was collected for the experiments, containing 4,428 NIR images of finger vein patterns captured under well-controlled conditions to minimize position and rotation-angle differences between sessions. Top illumination performs well because it is more homogeneous, which renders a larger number of veins visible. Narrower bundles of light do not affect which veins are visible, but they reduce overexposure at the finger boundaries and increase the quality of the vascular pattern images. The narrow beam achieves the best performance, with an FNMR of 0% at an FMR of 0.01%, while the wide beam consistently yields a higher false non-match rate. Comparing left-side against right-side illumination gives the highest error rates because only the veins in the middle of the finger are visible in both images. Different illumination directions may be interoperable, since they produce the same vascular pattern, which is principally the projected shadow on the finger surface. Score and image fusion of the right- and left-side captures results in recognition performance similar to that obtained with top illumination, indicating that the vein patterns are independent of illumination direction. All results of these experiments support the proposed model.
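Operating points like FNMR at FMR = 0.01% are computed by sweeping a decision threshold over the genuine and impostor score distributions. A small sketch with synthetic, well-separated scores (all data hypothetical):

```python
import numpy as np

def fnmr_at_fmr(genuine, impostor, target_fmr=1e-4):
    """False non-match rate at the threshold where the false match
    rate equals `target_fmr` (1e-4 = 0.01%).

    Assumes higher scores indicate a better match.
    """
    # Threshold = impostor-score quantile leaving target_fmr above it.
    thr = np.quantile(impostor, 1.0 - target_fmr)
    return np.mean(genuine < thr)

rng = np.random.default_rng(1)
genuine = rng.normal(0.9, 0.02, 5000)    # genuine comparison scores
impostor = rng.normal(0.1, 0.05, 5000)   # impostor comparison scores

# With this separation, no genuine score falls below the threshold.
assert fnmr_at_fmr(genuine, impostor) == 0.0
```

In practice the quantile estimate at such a low FMR needs a large impostor set to be reliable, which is one reason reported rates at strict operating points carry wide confidence intervals.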
Citations: 0
Impact of Occlusion Masks on Gender Classification from Iris Texture
IF 2 | CAS Tier 4, Computer Science | Q2 Computer Science | Pub Date: 2024-01-27 | DOI: 10.1049/2024/8526857
Claudio Yáñez, Juan E. Tapia, Claudio A. Perez, Christoph Busch

Gender classification on normalized iris images has been attempted previously with varying degrees of success. These studies have shown that occlusion masks, which are used in iris recognition to remove non-iris elements, may introduce gender information. When the goal is to classify gender using exclusively the iris texture, gender information present in the masks may inflate the apparent accuracy, thereby not reflecting the actual gender information present in the iris. However, no measures had been taken to eliminate this information while preserving as much iris information as possible. We propose a novel method to assess the gender information present in the iris more accurately by eliminating gender information from the masks. It consists of pairing irises that have similar masks but come from subjects of different gender, generating a paired mask with the OR operator, and applying this mask to both irises. Additionally, we manually fix iris segmentation errors to study their impact on gender classification. Our results show that occlusion masks can account for 6.92% of the gender classification accuracy on average. Therefore, works aiming to perform gender classification using the iris texture from normalized iris images should eliminate this correlation.
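The pairing step described above, which combines two occlusion masks with OR so that both irises are evaluated under an identical mask, can be sketched as follows. Image shapes and mask placements are hypothetical:

```python
import numpy as np

# Two normalized iris textures (rows x cols) and their occlusion masks,
# where mask == True marks occluded (non-iris) pixels to be discarded.
rng = np.random.default_rng(2)
iris_a = rng.random((64, 512))
iris_b = rng.random((64, 512))
mask_a = np.zeros((64, 512), dtype=bool)
mask_a[:8, :] = True    # e.g. an eyelid band in sample A
mask_b = np.zeros((64, 512), dtype=bool)
mask_b[:, :40] = True   # e.g. an eyelash region in sample B

# Paired mask: union of both occlusions, applied identically to both
# irises, so the mask shape itself can no longer separate the samples.
paired = mask_a | mask_b
masked_a = np.where(paired, 0.0, iris_a)
masked_b = np.where(paired, 0.0, iris_b)

# Union covers both individual masks minus their overlap:
# 8*512 + 64*40 - 8*40 = 6336 occluded pixels.
assert paired.sum() == 6336
assert (masked_a[paired] == 0.0).all() and (masked_b[paired] == 0.0).all()
```

Because the classifier now sees exactly the same occluded region for both members of a cross-gender pair, any residual accuracy must come from the iris texture itself rather than from mask geometry.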

Citations: 0
Noncontact Palm Vein ROI Extraction Based on Improved Lightweight HRnet in Complex Backgrounds
IF 2 | CAS Tier 4, Computer Science | Q2 Computer Science | Pub Date: 2024-01-17 | DOI: 10.1049/2024/4924184
Fen Dai, Ziyang Wang, Xiangqun Zou, Rongwen Zhang, Xiaoling Deng

The extraction of the ROI (region of interest) is a key step in noncontact palm vein recognition and is crucial for subsequent feature extraction and matching. A noncontact palm vein ROI extraction algorithm based on an improved HRnet for keypoint localization is proposed to handle hand-gesture irregularities, translation, scaling, and rotation in complex backgrounds. To reduce computation time and model size for eventual deployment in low-cost embedded systems, the improved HRnet is made lightweight by reconstructing the residual block structure and adopting depth-separable convolution, which greatly reduces the model size and improves the inference speed of forward propagation. Palm vein ROI localization and palm vein recognition are then evaluated on a self-built dataset and two public datasets (CASIA and TJU-PV). The proposed algorithm achieved 97.36% keypoint detection accuracy on the self-built palm vein dataset, and 98.23% and 98.74% on the two public palm vein datasets (CASIA and TJU-PV), respectively. The model size was only 0.45 M, and on a CPU with a clock speed of 3 GHz the average running time of ROI extraction for one image was 0.029 s. Based on the keypoints and the corresponding ROI extraction, the equal error rate (EER) of palm vein recognition was 0.000362%, 0.014541%, and 0.005951%, and the false non-match rate was 0.000001%, 11.034725%, and 4.613714% (at a false match rate of 0.01%) on the self-built dataset, TJU-PV, and CASIA, respectively. The experimental results show that the proposed algorithm is feasible and effective and provides a reliable experimental basis for research on palm vein recognition technology.
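The parameter saving from the depth-separable (depthwise-separable) convolutions adopted in the lightweight HRnet can be checked with simple arithmetic. The layer sizes below are hypothetical, not taken from the paper:

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Weights in a depthwise k x k conv plus a 1 x 1 pointwise conv."""
    return k * k * c_in + c_in * c_out

# Hypothetical layer: 3 x 3 kernel, 64 -> 128 channels.
std = conv_params(3, 64, 128)                 # 9 * 64 * 128  = 73,728
sep = depthwise_separable_params(3, 64, 128)  # 576 + 8,192   = 8,768
assert std == 73728 and sep == 8768
print(f"separable variant uses {sep / std:.1%} of the standard parameters")
```

For this layer the separable form needs roughly 12% of the weights of a standard convolution, which is the kind of reduction that makes a 0.45 M model size plausible.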

Citations: 0
Improving Sensor Interoperability between Contactless and Contact-Based Fingerprints Using Pose Correction and Unwarping
IF 2 | CAS Tier 4, Computer Science | Q2 Computer Science | Pub Date: 2023-12-18 | DOI: 10.1049/2023/7519499
L. Ruzicka, Dominik Söllinger, Bernhard Kohn, Clemens Heitzinger, Andreas Uhl, Bernhard Strobl
Current fingerprint identification systems face significant challenges in achieving interoperability between contact-based and contactless fingerprint sensors. In contrast to the existing literature, we propose a novel approach that combines pose correction with further enhancement operations. It uses deep learning models to steer the correction of the viewing angle, thereby enhancing the matching features of contactless fingerprints. The proposed approach was tested on real data from 78 participants (37,162 contactless fingerprints) acquired by national police officers using both contact-based and contactless sensors. The study found that the effectiveness of pose correction and unwarping varied significantly with the individual characteristics of each fingerprint. However, when the various extension methods were combined on a finger-wise basis, an average decrease of 36.9% in equal error rates (EERs) was observed. Additionally, the combined impact of pose correction and bidirectional unwarping led to an average increase of 3.72% in NFIQ 2 scores across all fingers, coupled with a 6.4% decrease in EERs relative to the baseline. The addition of deep learning techniques presents a promising approach for achieving high-quality fingerprint acquisition with contactless sensors, enhancing recognition accuracy in various domains.
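Relative improvements such as the reported 36.9% average EER decrease are simple to reproduce from raw per-finger rates. A sketch with entirely hypothetical EER values:

```python
def relative_decrease(baseline, improved):
    """Percent decrease of `improved` relative to `baseline`."""
    return 100.0 * (baseline - improved) / baseline

# Hypothetical per-finger EERs (%) before and after combining the
# pose-correction and unwarping extensions on a finger-wise basis.
baseline = [2.0, 1.5, 3.0]
improved = [1.2, 1.0, 1.9]

drops = [relative_decrease(b, i) for b, i in zip(baseline, improved)]
avg = sum(drops) / len(drops)
assert all(d > 0 for d in drops)   # every finger improved
assert 30.0 < avg < 40.0           # average drop in the reported ballpark
```

Averaging the per-finger relative decreases (rather than the decrease of the averaged EER) is a choice; the two can differ when baseline rates vary widely across fingers.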
Citations: 0
Adaptive Weighted Face Alignment by Multi-Scale Feature and Offset Prediction
IF 2 · CAS Tier 4, Computer Science · Q2 Computer Science · Pub Date: 2023-12-06 · DOI: 10.1049/2023/6636386
Jingwen Li, Jiuzhen Liang, Hao Liu, Zhenjie Hou
Traditional heatmap regression methods suffer from problems such as a lower bound on theoretical error and a lack of global constraints, which can cause results to collapse in practical applications. In this paper, we develop a facial landmark detection model aided by offset prediction to constrain the global shape. First, the hybrid detection model is used to roughly locate the initial coordinates predicted by the backbone network. At the same time, a head rotation attitude prediction module is added to the backbone network, and the Euler angle is used as an adaptive weight to modify the loss function so that the model is more robust to large-pose images. Then, we introduce an offset prediction network. It uses the heatmap corresponding to the initial coordinates as an attention mask fused with the features, so the network can focus on the area around landmarks. This model shares the global features and, starting from the initial coordinates, regresses the offset relative to the real coordinates to further enhance continuity. In addition, we add a multi-scale feature pre-extraction module to preprocess features, increasing feature scales and receptive fields. Experiments on several challenging public datasets show that our method achieves better performance than existing detection methods, confirming its effectiveness.
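The two-stage idea in this abstract, coarse localisation from a per-landmark heatmap followed by refinement with a regressed offset, can be sketched as follows (function and array names are illustrative, not from the paper):

```python
import numpy as np

def decode_landmarks(heatmaps, offsets):
    """Coarse landmark locations from per-landmark heatmaps (argmax),
    refined by a regressed (dx, dy) offset per landmark."""
    n, h, w = heatmaps.shape
    coords = np.zeros((n, 2))
    for i in range(n):
        y, x = divmod(np.argmax(heatmaps[i]), w)   # coarse integer peak
        coords[i] = np.array([x, y]) + offsets[i]  # sub-pixel refinement
    return coords

# One 5x5 heatmap peaked at (x=3, y=2); predicted offset (-0.25, +0.1).
hm = np.zeros((1, 5, 5))
hm[0, 2, 3] = 1.0
off = np.array([[-0.25, 0.1]])
print(decode_landmarks(hm, off))  # x refined to 2.75, y to 2.1
```

The offset term is what lifts the result past the integer-grid resolution limit of a pure heatmap argmax, which is one of the "theoretical error" limits the abstract mentions.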
Citations: 0
Automatic Signature Verifier Using Gaussian Gated Recurrent Unit Neural Network
CAS Tier 4, Computer Science · Q2 Computer Science · Pub Date: 2023-11-14 · DOI: 10.1049/2023/5087083
Sameera Khan, Dileep Kumar Singh, Mahesh Singh, Desta Faltaso Mena
Handwritten signatures are among the most extensively used biometrics for authentication, and forgeries of this behavioral biometric are widespread. Biometric databases are also difficult to access for training purposes due to privacy issues, which has severely harmed the efficiency of automated authentication systems. Efficient verification of static handwritten signatures remains an open research problem to date. This paper proposes an innovative introselect median filter for preprocessing and a novel Gaussian gated recurrent unit neural network (2GRUNN) as a classifier for designing an automatic verifier for handwritten signatures. The proposed classifier achieves an FPR of 1.82 and an FNR of 3.03. The efficacy of the proposed method is compared with various existing neural-network-based verifiers.
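For reference, the FPR and FNR this abstract reports can be computed from verifier decisions as below. The labels are toy data, and since the abstract does not state whether its 1.82/3.03 figures are percentages, the sketch returns raw rates:

```python
import numpy as np

def fpr_fnr(labels, preds):
    """False positive rate (forgeries accepted) and false negative rate
    (genuine signatures rejected). labels/preds: 1 = genuine, 0 = forgery."""
    labels, preds = np.asarray(labels), np.asarray(preds)
    fp = np.sum((labels == 0) & (preds == 1))
    fn = np.sum((labels == 1) & (preds == 0))
    fpr = fp / max(np.sum(labels == 0), 1)  # guard against an empty class
    fnr = fn / max(np.sum(labels == 1), 1)
    return float(fpr), float(fnr)

# Toy decisions: one accepted forgery and one rejected genuine out of 4 each.
labels = [1, 1, 1, 0, 0, 0, 1, 0]
preds  = [1, 0, 1, 0, 1, 0, 1, 0]
print(fpr_fnr(labels, preds))  # (0.25, 0.25)
```

In a forgery setting the two rates trade off differently: a false positive admits a forger, while a false negative merely inconveniences a legitimate signer, which is why both are reported separately.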
Citations: 0
Worst-Case Morphs Using Wasserstein ALI and Improved MIPGAN
CAS Tier 4, Computer Science · Q2 Computer Science · Pub Date: 2023-11-10 · DOI: 10.1049/2023/9353816
U. M. Kelly, M. Nauta, L. Liu, L. J. Spreeuwers, R. N. J. Veldhuis
A morph is a combination of two separate facial images and contains the identity information of two different people. When used in an identity document, both people can be authenticated by a biometric face recognition (FR) system. Morphs can be generated using either a landmark-based approach or approaches based on deep learning, such as generative adversarial networks (GANs). In a recent paper, we introduced a worst-case upper bound on how challenging morphing attacks can be for an FR system. The closer morphs are to this upper bound, the bigger the challenge they pose to FR. We introduced an approach with which it was possible to generate morphs that approximate this upper bound for a known FR system (white box) but not for unknown (black box) FR systems. In this paper, we introduce a morph generation method that can approximate worst-case morphs even when the FR system is not known. A key contribution is that we include the goal of generating difficult morphs during training. Our method is based on adversarially learned inference (ALI) and uses concepts from Wasserstein GANs trained with gradient penalty, which were introduced to stabilise the training of GANs. We include these concepts to achieve a similar improvement in training stability and call the resulting method Wasserstein ALI (WALI). We finetune WALI using loss functions designed specifically to improve the ability to manipulate identity information in facial images and show how it can generate morphs that are more challenging for FR systems than landmark- or GAN-based morphs. We also show how our findings can be used to improve MIPGAN, an existing StyleGAN-based morph generator.
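The abstract contrasts landmark-based and GAN-based morph generation. The simplest ingredient of a landmark-based morph, averaging two aligned faces' landmark sets and alpha-blending their pixels, can be sketched as follows (toy arrays; a real pipeline would also warp each image onto the averaged landmark geometry, e.g. via Delaunay triangulation, before blending):

```python
import numpy as np

def naive_morph(img_a, img_b, lm_a, lm_b, alpha=0.5):
    """Toy morph: average corresponding landmarks and alpha-blend pixels.
    A full landmark-based morph would first warp both images onto the
    averaged geometry before blending."""
    lm_morph = alpha * lm_a + (1 - alpha) * lm_b     # blended geometry
    img_morph = alpha * img_a + (1 - alpha) * img_b  # blended appearance
    return img_morph, lm_morph

# Two stand-in 4x4 grayscale "faces" and two landmarks each.
a, b = np.full((4, 4), 100.0), np.full((4, 4), 200.0)
lm_a = np.array([[1.0, 1.0], [3.0, 2.0]])
lm_b = np.array([[2.0, 1.0], [3.0, 3.0]])
img_m, lm_m = naive_morph(a, b, lm_a, lm_b)
print(img_m[0, 0], lm_m[0])  # blended pixel 150.0, landmark (1.5, 1.0)
```

A morph succeeds when such a blended image matches both contributing identities under an FR system; the paper's worst-case morphs are those that maximise this double-match difficulty.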
Citations: 0