{"title":"面向教学情境的面部表情数据集开发:初步研究","authors":"Pipit Utami, Rudy Hartanto, I. Soesanti","doi":"10.1109/IC2IE56416.2022.9970043","DOIUrl":null,"url":null,"abstract":"Increasing the FER accuracy can be done with the Deep-CNN model. However, the model requires a dataset in the training and testing process. Meanwhile, there is still a scarcity of facial expression datasets with expressions in specific contexts for emotion recognition. In general, the existing datasets show common expressions. Therefore, this paper proposes a dataset that includes basic and specific complex emotions in teaching contexts that can be used in the Deep-CNN model. The developed dataset consists of six basic expressions, neutral, and five specific expressions in the teaching context, namely anxiety, enjoyment, hope, hopelessness, and shame. The dataset was obtained from 52 respondents. Dataset development methods consist of needs identification, data collection, data validation, data adjustment, data training and data evaluation. Dataset test performance from testing the four Deep-CNN architectures shows that the multiple emotion classes in the dataset can be classified well. Accuracy using simple CNN is 90%, while the three types of Xception vary with values of 88%, 92% and 93%. Likewise, with accuracy, for precision, recall and f1score from the results of testing datasets with four CNN architectures show good values. The training time on simple CNN took 49.55 minutes and for the three types of Xception it was 47.67 minutes, 32.69 minutes, and 32.56 minutes.","PeriodicalId":151165,"journal":{"name":"2022 5th International Conference of Computer and Informatics Engineering (IC2IE)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"The Development of Facial Expressions Dataset for Teaching Context: Preliminary Research\",\"authors\":\"Pipit Utami, Rudy Hartanto, I. Soesanti\",\"doi\":\"10.1109/IC2IE56416.2022.9970043\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Increasing the FER accuracy can be done with the Deep-CNN model. However, the model requires a dataset in the training and testing process. Meanwhile, there is still a scarcity of facial expression datasets with expressions in specific contexts for emotion recognition. In general, the existing datasets show common expressions. Therefore, this paper proposes a dataset that includes basic and specific complex emotions in teaching contexts that can be used in the Deep-CNN model. The developed dataset consists of six basic expressions, neutral, and five specific expressions in the teaching context, namely anxiety, enjoyment, hope, hopelessness, and shame. The dataset was obtained from 52 respondents. Dataset development methods consist of needs identification, data collection, data validation, data adjustment, data training and data evaluation. Dataset test performance from testing the four Deep-CNN architectures shows that the multiple emotion classes in the dataset can be classified well. Accuracy using simple CNN is 90%, while the three types of Xception vary with values of 88%, 92% and 93%. Likewise, with accuracy, for precision, recall and f1score from the results of testing datasets with four CNN architectures show good values. 
The training time on simple CNN took 49.55 minutes and for the three types of Xception it was 47.67 minutes, 32.69 minutes, and 32.56 minutes.\",\"PeriodicalId\":151165,\"journal\":{\"name\":\"2022 5th International Conference of Computer and Informatics Engineering (IC2IE)\",\"volume\":\"31 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-09-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 5th International Conference of Computer and Informatics Engineering (IC2IE)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IC2IE56416.2022.9970043\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 5th International Conference of Computer and Informatics Engineering (IC2IE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IC2IE56416.2022.9970043","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
The Development of Facial Expressions Dataset for Teaching Context: Preliminary Research
Facial expression recognition (FER) accuracy can be improved with Deep-CNN models. However, such models require a dataset for training and testing. Meanwhile, facial expression datasets that capture expressions in specific contexts for emotion recognition remain scarce; in general, existing datasets cover only common expressions. Therefore, this paper proposes a dataset of basic and context-specific complex emotions in teaching settings that can be used with Deep-CNN models. The developed dataset consists of six basic expressions, a neutral expression, and five expressions specific to the teaching context, namely anxiety, enjoyment, hope, hopelessness, and shame. The data were obtained from 52 respondents. The dataset development method consists of needs identification, data collection, data validation, data adjustment, data training, and data evaluation. Test performance across four Deep-CNN architectures shows that the multiple emotion classes in the dataset can be classified well. Accuracy with a simple CNN is 90%, while the three Xception variants achieve 88%, 92%, and 93%. Precision, recall, and F1-score from testing the dataset with the four CNN architectures likewise show good values. Training the simple CNN took 49.55 minutes, and the three Xception variants took 47.67, 32.69, and 32.56 minutes.
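Since the abstract names the architectures but not their configuration, the following is a minimal, hypothetical sketch of how such a 12-class teaching-context expression dataset (six basic emotions, neutral, and five teaching-specific emotions) might be fine-tuned with an ImageNet-pretrained Xception backbone in Keras and scored with accuracy, precision, recall, and F1. The directory path, image size, split, and hyperparameters are assumptions for illustration, not details taken from the paper.

```python
# Illustrative sketch only: fine-tune an Xception backbone on a 12-class
# facial-expression dataset and report accuracy, precision, recall and F1.
# Paths and hyperparameters are hypothetical, not from the paper.
import numpy as np
import tensorflow as tf
from sklearn.metrics import classification_report

IMG_SIZE = (299, 299)              # Xception's native input resolution
NUM_CLASSES = 12                   # 6 basic + neutral + 5 teaching-context emotions
DATA_DIR = "teaching_fer_dataset"  # hypothetical path: one sub-folder per class

# Load train/validation splits from a folder-per-class layout.
train_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, validation_split=0.2, subset="training", seed=42,
    image_size=IMG_SIZE, label_mode="categorical", batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, validation_split=0.2, subset="validation", seed=42,
    image_size=IMG_SIZE, label_mode="categorical", batch_size=32)

# ImageNet-pretrained Xception backbone with a new 12-way softmax head.
backbone = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=IMG_SIZE + (3,))
backbone.trainable = False         # freeze the backbone for a quick first pass

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.xception.preprocess_input(inputs)
x = backbone(x, training=False)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=5)

# Evaluate with the same metrics the abstract reports.
y_true, y_pred = [], []
for images, labels in val_ds:
    probs = model.predict(images, verbose=0)
    y_true.extend(np.argmax(labels.numpy(), axis=1))
    y_pred.extend(np.argmax(probs, axis=1))
print(classification_report(y_true, y_pred))
```

A comparable "simple CNN" baseline, as contrasted with Xception in the abstract, could be obtained by replacing the frozen backbone above with a short stack of Conv2D/MaxPooling2D layers trained from scratch.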