{"title":"Classification Emotion Using Densenet","authors":"Juan Liang","doi":"10.1109/ISAIEE57420.2022.00043","DOIUrl":null,"url":null,"abstract":"Facial expressions are a way of expressing human emotions. Using standard convolutional neural networks (CNN) to extract and classify facial expressions has not achieved satisfactory accuracy in the past, making the classification of facial emotions a challenge, due to lack of large dataset and advanced CNN models. In this paper, a transfer learning approach is used to improve emotion recognition accuracy. Firstly, a Densenet model pre-trained in the ImageNet dataset is chosen. Next, fine-tuning is performed on the network model, which aims to extract features from images to recognize and classify seven emotional expressions. It features fewer parameters and is resistant to overfitting compared to other models. The model is trained using the facial expression dataset, which make up of 28,709 48*48 pixel images. Experimental results show that the model is better in accuracy compared to the performance of vgg19.","PeriodicalId":345703,"journal":{"name":"2022 International Symposium on Advances in Informatics, Electronics and Education (ISAIEE)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Symposium on Advances in Informatics, Electronics and Education (ISAIEE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISAIEE57420.2022.00043","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Facial expressions are a way of expressing human emotions. Classifying facial emotions with standard convolutional neural networks (CNNs) has historically not achieved satisfactory accuracy, largely due to the lack of large datasets and of advanced CNN models, which makes the task challenging. In this paper, a transfer learning approach is used to improve emotion recognition accuracy. First, a DenseNet model pre-trained on the ImageNet dataset is chosen. Next, the network is fine-tuned to extract features from images and to recognize and classify seven emotional expressions. Compared with other models, it has fewer parameters and is more resistant to overfitting. The model is trained on a facial expression dataset consisting of 28,709 images of 48×48 pixels. Experimental results show that the model achieves better accuracy than VGG19.
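As a rough illustration of the transfer-learning recipe the abstract describes (not the authors' code), the sketch below fine-tunes an ImageNet-pretrained DenseNet from torchvision for 7-class facial expression recognition. The specific DenseNet variant (DenseNet-121), the dataset directory layout, and all hyperparameters are assumptions, since the abstract does not specify them.

```python
# Minimal sketch: fine-tuning an ImageNet-pretrained DenseNet-121 for
# 7-class facial expression classification. Variant, paths, and
# hyperparameters are illustrative assumptions, not the paper's setup.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# FER-style 48x48 grayscale images: replicate to 3 channels and resize so
# they match the input expected by the ImageNet-pretrained backbone.
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical directory with one sub-folder per emotion class (7 classes).
train_set = datasets.ImageFolder("data/fer/train", transform=preprocess)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load DenseNet-121 pre-trained on ImageNet and replace the classifier head
# with a 7-way output layer for the seven emotional expressions.
model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, 7)
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Fine-tune the whole network; one could also freeze the early dense blocks
# and train only the classifier head.
model.train()
for epoch in range(10):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

A VGG19 baseline for the reported comparison could be set up the same way by swapping in `models.vgg19` and replacing its final fully connected layer with a 7-way output.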