Face Recognition Using Convolutional Neural Networks and Metadata in a Feature Fusion Model

Abdoul Kamal Assouma, Tahirou Djara, Abdou Wahidi Bello, Abdou-Aziz Sobabe, Antoine Vianou, Wilfried Tomenou
{"title":"Face Recognition Using Convolutional Neural Networks and Metadata in a Feature Fusion Model","authors":"Abdoul Kamal Assouma, Tahirou Djara, Abdou Wahidi Bello, Abdou-Aziz Sobabe, Antoine Vianou, Wilfried Tomenou","doi":"10.9734/cjast/2023/v42i394256","DOIUrl":null,"url":null,"abstract":"Recent advances in science and technology are raising ever-increasing security issues. In response, traditional authentication systems based on knowledge or possession have been developed, but these soon came up against limitations in terms of security and practicality. To overcome these limitations, other systems based on the individual's unique characteristics, known as biometric modalities, were developed. Of the various ways of improving the performance of biometric systems, feature fusion and the joint use of a pure biometric modality and a soft biometric modality (multi-origin biometrics) are highly promising. Unfortunately, however, we note a virtual absence of multi-origin systems in a feature fusion strategy. For our work, we therefore set out to design such a multi-origin system fusing facial features and skin color. Using OpenCV (Open Computer Vision) and Python, we extracted facial features and merged them with skin color to characterize each individual. The HOG (Histogram of Oriented Gradients) algorithm was used for face detection, and Google's deep neural network for encoding. For skin color, segmentation in the HSV (Hue, Saturation, Value) color space enabled us to isolate the skin in each image, and thanks to the k-means algorithm we had detected the dominant skin colors. The system designed in this way enabled us to go from 81.8% as a TR (Recognition Rate) with the face alone to 86.8% after fusion for a TFA (False Acceptance Rate) set at 0.1% and from 0.6% as a TEE (Equal Error Rate) to 0.55%.","PeriodicalId":10730,"journal":{"name":"Current Journal of Applied Science and Technology","volume":"3 2","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Current Journal of Applied Science and Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.9734/cjast/2023/v42i394256","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Recent advances in science and technology raise ever-increasing security concerns. Traditional authentication systems based on knowledge or possession were developed in response, but they quickly ran up against limitations in security and practicality. To overcome these limitations, systems based on an individual's unique characteristics, known as biometric modalities, were introduced. Among the various ways of improving the performance of biometric systems, feature fusion and the joint use of a pure biometric modality with a soft biometric modality (multi-origin biometrics) are highly promising. However, multi-origin systems built on a feature-fusion strategy are virtually absent from the literature. In this work, we therefore designed such a multi-origin system, fusing facial features with skin color. Using OpenCV (Open Computer Vision) and Python, we extracted facial features and merged them with skin color to characterize each individual. The HOG (Histogram of Oriented Gradients) algorithm was used for face detection, and Google's deep neural network for face encoding. For skin color, segmentation in the HSV (Hue, Saturation, Value) color space isolated the skin in each image, and the k-means algorithm then detected the dominant skin colors. The resulting system raised the recognition rate (TR) from 81.8% with the face alone to 86.8% after fusion, at a false acceptance rate (TFA) fixed at 0.1%, and lowered the equal error rate (TEE) from 0.6% to 0.55%.
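
The abstract describes the pipeline only in outline, so the following is a minimal, hedged sketch of how such a system could be assembled in Python. It is not the authors' code: the use of the face_recognition package (a dlib-based HOG detector with a 128-dimensional deep encoding), scikit-learn for k-means, the HSV skin thresholds, and the choice of three color clusters are all assumptions made for illustration.

```python
# Illustrative sketch of the described pipeline (not the authors' implementation).
# Assumes: face_recognition (HOG face detector + 128-D deep face encoding),
# OpenCV, NumPy, scikit-learn. Thresholds and cluster count are illustrative.
import cv2
import numpy as np
import face_recognition
from sklearn.cluster import KMeans

def face_descriptor(image_bgr):
    """128-D deep face encoding of the first face found by the HOG detector."""
    rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)
    boxes = face_recognition.face_locations(rgb, model="hog")   # HOG detection
    if not boxes:
        return None
    return face_recognition.face_encodings(rgb, boxes)[0]       # deep encoding

def dominant_skin_colors(image_bgr, k=3):
    """Dominant skin colors via HSV segmentation followed by k-means."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    # Broad skin range in HSV -- an assumed threshold, tuned per dataset.
    mask = cv2.inRange(hsv, (0, 40, 60), (25, 255, 255))
    skin_pixels = image_bgr[mask > 0].astype(np.float32)
    if len(skin_pixels) < k:
        return np.zeros(k * 3, dtype=np.float32)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(skin_pixels)
    # Order cluster centers by the number of pixels they cover (most dominant first).
    order = np.argsort(-np.bincount(km.labels_, minlength=k))
    return km.cluster_centers_[order].flatten()

def fused_features(image_bgr):
    """Feature-level fusion: concatenate the face encoding and skin-color vector."""
    face_vec = face_descriptor(image_bgr)
    if face_vec is None:
        return None
    skin_vec = dominant_skin_colors(image_bgr) / 255.0  # simple normalization
    return np.concatenate([face_vec, skin_vec])
```

Matching between fused vectors could then be done with a simple distance threshold, which is where an operating point such as the 0.1% TFA reported above would be set.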