{"title":"3DEmo:用于新数据集的人像情感识别","authors":"Shao Liu, Sos S. Agaian","doi":"10.1145/3631133","DOIUrl":null,"url":null,"abstract":"Emotional Expression Recognition (EER) and Facial Expression Recognition (FER) are active research areas in the affective computing field, which involves studying human emotion, recognition, and sentiment analysis. The main objective of this research is to develop algorithms that can accurately interpret and estimate human emotions from portrait images. The emotions depicted in a portrait can reflect various factors such as psychological and physiological states, the artist’s emotional responses, social and environmental aspects, and the period in which the painting was created. This task is challenging because (i) the portraits are often depicted in an artistic or stylized manner rather than realistically or naturally, (ii) the texture and color features obtained from natural faces and paintings differ, affecting the success rate of emotion recognition algorithms, and (iii) it is a new research area, where practically we don’t have visual arts portrait facial emotion estimation models or datasets. To address these challenges, we need a new class of tools and a database specifically tailored to analyze portrait images. This study aims to develop art portrait emotion recognition methods and create a new digital portrait dataset containing 927 images. The proposed model is based on (i) a 3-dimensional estimation of emotions learned by a deep neural network and (ii) a novel deep learning module (3DEmo) that could be easily integrated into existing FER models. To evaluate the effectiveness of the developed models, we also tested their robustness on a facial emotion recognition dataset. The extensive simulation results show that the presented approach outperforms established methods. We expect that this dataset and the developed new tools will encourage further research in recognizing emotions in portrait paintings and predicting artists’ emotions in the painting period based on their artwork.","PeriodicalId":54310,"journal":{"name":"ACM Journal on Computing and Cultural Heritage","volume":"10 2","pages":"0"},"PeriodicalIF":2.1000,"publicationDate":"2023-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"3DEmo: for Portrait Emotion Recognition with New Dataset\",\"authors\":\"Shao Liu, Sos S. Agaian\",\"doi\":\"10.1145/3631133\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Emotional Expression Recognition (EER) and Facial Expression Recognition (FER) are active research areas in the affective computing field, which involves studying human emotion, recognition, and sentiment analysis. The main objective of this research is to develop algorithms that can accurately interpret and estimate human emotions from portrait images. The emotions depicted in a portrait can reflect various factors such as psychological and physiological states, the artist’s emotional responses, social and environmental aspects, and the period in which the painting was created. This task is challenging because (i) the portraits are often depicted in an artistic or stylized manner rather than realistically or naturally, (ii) the texture and color features obtained from natural faces and paintings differ, affecting the success rate of emotion recognition algorithms, and (iii) it is a new research area, where practically we don’t have visual arts portrait facial emotion estimation models or datasets. 
To address these challenges, we need a new class of tools and a database specifically tailored to analyze portrait images. This study aims to develop art portrait emotion recognition methods and create a new digital portrait dataset containing 927 images. The proposed model is based on (i) a 3-dimensional estimation of emotions learned by a deep neural network and (ii) a novel deep learning module (3DEmo) that could be easily integrated into existing FER models. To evaluate the effectiveness of the developed models, we also tested their robustness on a facial emotion recognition dataset. The extensive simulation results show that the presented approach outperforms established methods. We expect that this dataset and the developed new tools will encourage further research in recognizing emotions in portrait paintings and predicting artists’ emotions in the painting period based on their artwork.\",\"PeriodicalId\":54310,\"journal\":{\"name\":\"ACM Journal on Computing and Cultural Heritage\",\"volume\":\"10 2\",\"pages\":\"0\"},\"PeriodicalIF\":2.1000,\"publicationDate\":\"2023-11-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACM Journal on Computing and Cultural Heritage\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3631133\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Journal on Computing and Cultural Heritage","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3631133","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Abstract:
Emotional Expression Recognition (EER) and Facial Expression Recognition (FER) are active research areas in affective computing, which encompasses the study of human emotion, emotion recognition, and sentiment analysis. The main objective of this research is to develop algorithms that can accurately interpret and estimate human emotions from portrait images. The emotions depicted in a portrait can reflect various factors such as psychological and physiological states, the artist’s emotional responses, social and environmental aspects, and the period in which the painting was created. This task is challenging because (i) portraits are often depicted in an artistic or stylized manner rather than realistically or naturally, (ii) the texture and color features obtained from natural faces and from paintings differ, affecting the success rate of emotion recognition algorithms, and (iii) this is a new research area for which practically no facial emotion estimation models or datasets exist for visual-arts portraits. To address these challenges, we need a new class of tools and a database specifically tailored to the analysis of portrait images. This study aims to develop art portrait emotion recognition methods and create a new digital portrait dataset containing 927 images. The proposed model is based on (i) a 3-dimensional estimation of emotions learned by a deep neural network and (ii) a novel deep learning module (3DEmo) that can be easily integrated into existing FER models. To evaluate the effectiveness of the developed models, we also tested their robustness on a facial emotion recognition dataset. Extensive simulation results show that the presented approach outperforms established methods. We expect that this dataset and the newly developed tools will encourage further research on recognizing emotions in portrait paintings and on predicting artists’ emotions during the period in which a painting was created, based on their artwork.
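The abstract describes the model only at a high level, so the following is a minimal, purely illustrative PyTorch sketch of how a 3-dimensional emotion estimate might be attached to an existing FER backbone. All concrete choices here are assumptions rather than details from the paper: the three dimensions are treated as a continuous valence/arousal/dominance vector, the backbone is a torchvision ResNet-18, and the added module is a small regression head alongside the usual discrete-expression classifier.

```python
# Illustrative sketch only: the abstract does not specify the 3DEmo architecture.
# Assumptions (not from the paper): 3 continuous dimensions (valence/arousal/dominance),
# a ResNet-18 feature extractor, and an extra regression head next to the FER classifier.
import torch
import torch.nn as nn
from torchvision import models


class Emotion3DHead(nn.Module):
    """Small head mapping backbone features to a 3-dimensional emotion estimate."""

    def __init__(self, in_features: int, hidden: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_features, hidden),
            nn.ReLU(inplace=True),
            nn.Dropout(0.3),
            nn.Linear(hidden, 3),  # e.g. valence, arousal, dominance
            nn.Tanh(),             # keep each dimension in [-1, 1]
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.mlp(features)


class PortraitEmotionModel(nn.Module):
    """FER backbone with a discrete-expression head and the hypothetical 3-D head."""

    def __init__(self, num_discrete_classes: int = 7):
        super().__init__()
        backbone = models.resnet18(weights=None)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()  # reuse the backbone as a feature extractor
        self.backbone = backbone
        self.fer_head = nn.Linear(feat_dim, num_discrete_classes)  # standard FER logits
        self.emo3d_head = Emotion3DHead(feat_dim)                  # added 3-D estimate

    def forward(self, images: torch.Tensor):
        feats = self.backbone(images)
        return self.fer_head(feats), self.emo3d_head(feats)


if __name__ == "__main__":
    model = PortraitEmotionModel()
    logits, emo3d = model(torch.randn(2, 3, 224, 224))
    print(logits.shape, emo3d.shape)  # torch.Size([2, 7]) torch.Size([2, 3])
```

The actual 3DEmo module and the way it integrates with FER models may differ substantially; this sketch only illustrates the general idea of augmenting a FER network with a 3-dimensional emotion output.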
Journal introduction:
ACM Journal on Computing and Cultural Heritage (JOCCH) publishes papers of significant and lasting value in all areas relating to the use of information and communication technologies (ICT) in support of cultural heritage. The journal encourages the submission of manuscripts that demonstrate innovative use of technology for the discovery, analysis, interpretation, and presentation of cultural material, as well as manuscripts that illustrate applications in the cultural heritage sector that challenge computational technologies and suggest new research opportunities in computer science.