Performance of Deep Learning Benchmark Models on Thermal Imagery of Pain through Facial Expressions
Raihan Islamadina, Khairun Saddami, Maulisa Oktiana, Taufik Fuadi Abidin, R. Muharar, F. Arnia
2022 IEEE International Conference on Communication, Networks and Satellite (COMNETSAT), published 2022-11-03
DOI: 10.1109/COMNETSAT56033.2022.9994546
Citations: 0
Abstract
This paper discusses the performance of three deep learning models, ResNet, MobileNetV2, and EfficientNet, for pain recognition through facial expressions. The dataset consists of thermal images obtained from the Multimodal Pain Intensity (MintPain) database, a database for facial pain-level recognition. Each model was pretrained on other datasets and adapted to this task through transfer learning. Training was run for 5, 20, 40, and 60 epochs with a minibatch size of 24, a learning rate of 0.001, a momentum of 0.9, and a learning-rate factor of 10 for both weights and biases. At epoch 40, ResNet, MobileNetV2, and EfficientNet reached training accuracies of 100%, 100%, and 99.60%, respectively. Finally, each trained model was evaluated on the test set; MobileNetV2 achieved the best result, correctly classifying 82.3% of the test data.
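The abstract gives enough of the training configuration to sketch the transfer-learning setup it describes. Below is a minimal Python/PyTorch sketch, assuming torchvision pretrained weights, an ImageFolder-style layout for the thermal images, and five pain levels; the dataset path, class count, and the mapping of the weight/bias learning-rate factor to a 10x learning rate on the new classification head are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of the transfer-learning setup described in the abstract,
# assuming a PyTorch/torchvision environment; the paper's actual framework,
# preprocessing, and MintPain data loader are not specified in the abstract.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_CLASSES = 5   # assumption: one class per annotated pain level
BATCH_SIZE = 24   # minibatch size reported in the abstract
EPOCHS = 40       # epoch count with the best reported training accuracy

# Hypothetical folder of thermal face crops, one sub-folder per pain level.
# ImageFolder's default loader converts frames to 3-channel RGB, which matches
# the input expected by ImageNet-pretrained backbones.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("mintpain/thermal/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=BATCH_SIZE, shuffle=True)

# Pretrained MobileNetV2 backbone with a new classification head.
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
model.classifier[1] = nn.Linear(model.last_channel, NUM_CLASSES)

# The abstract's "learning-rate factor of 10 for weights and biases" is mapped
# here to a 10x higher learning rate on the new head via parameter groups.
base_lr = 0.001
optimizer = torch.optim.SGD(
    [
        {"params": model.features.parameters(), "lr": base_lr},
        {"params": model.classifier.parameters(), "lr": base_lr * 10},
    ],
    momentum=0.9,
)
criterion = nn.CrossEntropyLoss()

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

for epoch in range(EPOCHS):
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

The same loop applies to the ResNet and EfficientNet variants by swapping the backbone constructor and replacing the corresponding final linear layer.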