{"title":"Deep learning based features extraction for facial gender classification using ensemble of machine learning technique","authors":"Fazal Waris, Feipeng Da, Shanghuan Liu","doi":"10.1007/s00530-024-01399-5","DOIUrl":null,"url":null,"abstract":"<p>Accurate and efficient gender recognition is an essential for many applications such as surveillance, security, and biometrics. Recently, deep learning techniques have made remarkable advancements in feature extraction and have become extensively implemented in various applications, including gender classification. However, despite the numerous studies conducted on the problem, correctly recognizing robust and essential features from face images and efficiently distinguishing them with high accuracy in the wild is still a challenging task for real-world applications. This article proposes an approach that combines deep learning and soft voting-based ensemble model to perform automatic gender classification with high accuracy in an unconstrained environment. In the proposed technique, a novel deep convolutional neural network (DCNN) was designed to extract 128 high-quality and accurate features from face images. The StandardScaler method was then used to pre-process these extracted features, and finally, these preprocessed features were classified with soft voting ensemble learning model combining the outputs from several machine learning classifiers such as random forest (RF), support vector machine (SVM), linear discriminant analysis (LDA), logistic regression (LR), gradient boosting classifier (GBC) and XGBoost to improve the prediction accuracy. The experimental study was performed on the UTK, label faces in the wild (LFW), Adience and FEI datasets. The results attained evidently show that the proposed approach outperforms all current approaches in terms of accuracy across all datasets.</p>","PeriodicalId":3,"journal":{"name":"ACS Applied Electronic Materials","volume":null,"pages":null},"PeriodicalIF":4.3000,"publicationDate":"2024-07-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACS Applied Electronic Materials","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s00530-024-01399-5","RegionNum":3,"RegionCategory":"材料科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Abstract
Accurate and efficient gender recognition is essential for many applications such as surveillance, security, and biometrics. Recently, deep learning techniques have made remarkable advances in feature extraction and have been widely adopted in various applications, including gender classification. However, despite the numerous studies conducted on the problem, correctly recognizing robust and essential features from face images and distinguishing them efficiently and with high accuracy in the wild remains a challenging task for real-world applications. This article proposes an approach that combines deep learning with a soft voting-based ensemble model to perform automatic gender classification with high accuracy in an unconstrained environment. In the proposed technique, a novel deep convolutional neural network (DCNN) was designed to extract 128 high-quality and accurate features from face images. The StandardScaler method was then used to preprocess these extracted features, and finally the preprocessed features were classified with a soft voting ensemble learning model that combines the outputs of several machine learning classifiers, namely random forest (RF), support vector machine (SVM), linear discriminant analysis (LDA), logistic regression (LR), gradient boosting classifier (GBC), and XGBoost, to improve prediction accuracy. The experimental study was performed on the UTK, Labeled Faces in the Wild (LFW), Adience, and FEI datasets. The results clearly show that the proposed approach outperforms all current approaches in terms of accuracy across all datasets.
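The abstract describes a three-stage pipeline: DCNN feature extraction, StandardScaler normalization, and a soft voting ensemble over six classifiers. The sketch below is a minimal illustration of the last two stages using scikit-learn and xgboost; the DCNN feature extractor is replaced by synthetic 128-dimensional vectors, and all hyperparameters are illustrative assumptions rather than the authors' settings.

```python
# Minimal sketch of the classification stage described in the abstract.
# Assumes 128-dimensional face features have already been extracted by a DCNN;
# synthetic data is used here as a stand-in. Requires scikit-learn and xgboost.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import (
    VotingClassifier,
    RandomForestClassifier,
    GradientBoostingClassifier,
)
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Stand-in for DCNN output: N samples x 128 features, binary gender labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 128))
y = rng.integers(0, 2, size=1000)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Soft-voting ensemble over the six classifiers named in the abstract.
ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),  # probability=True enables soft voting
        ("lda", LinearDiscriminantAnalysis()),
        ("lr", LogisticRegression(max_iter=1000)),
        ("gbc", GradientBoostingClassifier(random_state=0)),
        ("xgb", XGBClassifier(eval_metric="logloss", random_state=0)),
    ],
    voting="soft",  # average the predicted class probabilities
)

# StandardScaler preprocessing followed by the ensemble, as in the abstract.
model = make_pipeline(StandardScaler(), ensemble)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```

With real DCNN features in place of the synthetic arrays, the same pipeline object can be fit on the training split of each dataset (UTK, LFW, Adience, FEI) and scored on the corresponding test split.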