Face Recognition Using Convolutional Neural Networks and Metadata in a Feature Fusion Model

Abdoul Kamal Assouma, Tahirou Djara, Abdou Wahidi Bello, Abdou-Aziz Sobabe, Antoine Vianou, Wilfried Tomenou

Current Journal of Applied Science and Technology, published 2023-10-28. DOI: 10.9734/cjast/2023/v42i394256

Abstract
Recent advances in science and technology raise ever-increasing security issues. In response, traditional authentication systems based on knowledge or possession were developed, but these soon ran up against limitations in security and practicality. To overcome them, systems based on an individual's unique characteristics, known as biometric modalities, were introduced. Among the various ways of improving the performance of biometric systems, feature fusion and the joint use of a pure biometric modality with a soft biometric modality (multi-origin biometrics) are highly promising. Unfortunately, multi-origin systems built on a feature-fusion strategy are virtually absent from the literature. In this work, we therefore set out to design such a multi-origin system, fusing facial features with skin color. Using OpenCV (Open Source Computer Vision) and Python, we extracted facial features and merged them with skin color to characterize each individual. The HOG (Histogram of Oriented Gradients) algorithm was used for face detection, and Google's deep neural network for encoding. For skin color, segmentation in the HSV (Hue, Saturation, Value) color space isolated the skin in each image, and the k-means algorithm then detected the dominant skin colors. The resulting system raised the recognition rate (TR) from 81.8% with the face alone to 86.8% after fusion, at a false acceptance rate (TFA) set to 0.1%, and lowered the equal error rate (TEE) from 0.6% to 0.55%.
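The skin-color stage the abstract describes (HSV segmentation to isolate skin pixels, then k-means to find the dominant colors) can be sketched as follows. This is a minimal illustration in plain NumPy, not the authors' implementation: the HSV thresholds, the number of clusters `k`, and the plain Lloyd-style k-means are all assumptions chosen for self-containment (the paper's pipeline used OpenCV).

```python
import numpy as np

def rgb_to_hsv(img):
    """Vectorized RGB -> HSV for an (H, W, 3) float array in [0, 1].
    Returns H in degrees [0, 360), S and V in [0, 1]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    v = img.max(axis=-1)
    c = v - img.min(axis=-1)                      # chroma
    s = np.where(v > 0, c / np.maximum(v, 1e-12), 0.0)
    h = np.zeros_like(v)
    mask = c > 0
    rm = mask & (v == r)                          # red channel is the max
    gm = mask & (v == g) & ~rm                    # green channel is the max
    bm = mask & ~rm & ~gm                         # blue channel is the max
    h[rm] = ((g - b)[rm] / c[rm]) % 6
    h[gm] = (b - r)[gm] / c[gm] + 2
    h[bm] = (r - g)[bm] / c[bm] + 4
    return np.stack([h * 60, s, v], axis=-1)

def skin_mask(hsv, h_max=50.0, s_min=0.15, v_min=0.2):
    """Crude skin segmentation in HSV; thresholds are illustrative only."""
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    return (h <= h_max) & (s >= s_min) & (v >= v_min)

def dominant_colors(pixels, k=3, iters=20, seed=0):
    """Plain k-means over an (N, 3) float pixel array.
    Returns (k, 3) centroids sorted by cluster size, largest first."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iters):
        # assign each pixel to its nearest centroid, then recompute means
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    counts = np.bincount(labels, minlength=k)
    return centers[np.argsort(-counts)]
```

In a fusion setup of the kind the abstract outlines, the dominant-color centroids (flattened) would be concatenated with the face encoding to form each individual's combined feature vector before matching.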