{"title":"LUCFER: A Large-Scale Context-Sensitive Image Dataset for Deep Learning of Visual Emotions","authors":"Pooyan Balouchian, M. Safaei, H. Foroosh","doi":"10.1109/WACV.2019.00180","DOIUrl":null,"url":null,"abstract":"Still image emotion recognition has been receiving increasing attention in recent years due to the tremendous amount of social media content available on the Web. Opinion mining, visual emotion analysis, search and retrieval are among the application areas, to name a few. While there exist works on the subject, offering methods to detect image sentiment; i.e. recognizing the polarity of the image, less efforts focus on emotion analysis; i.e. dealing with recognizing the exact emotion aroused when exposed to certain visual stimuli. Main gaps tackled in this work include (1) lack of large-scale image datasets for deep learning of visual emotions and (2) lack of context-sensitive single-modality approaches in emotion analysis in the still image domain. In this paper, we introduce LUCFER (Pronounced LU-CI-FER), a dataset containing over 3.6M images, with 3-dimensional labels; i.e. emotion, context and valence. LUCFER, the largest dataset of the kind currently available, is collected using a novel data collection pipeline, proposed and implemented in this work. Moreover, we train a context-sensitive deep classifier using a novel multinomial classification technique proposed here via adding a dimensionality reduction layer to the CNN. Relying on our categorical approach to emotion recognition, we claim and show empirically that injecting context to our unified training process helps (1) achieve a more balanced precision and recall, and (2) boost performance, yielding an overall classification accuracy of 73.12% compared to 58.3% achieved in the closest work in the literature.","PeriodicalId":436637,"journal":{"name":"2019 IEEE Winter Conference on Applications of Computer Vision (WACV)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"11","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE Winter Conference on Applications of Computer Vision (WACV)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/WACV.2019.00180","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 11
Abstract
Still image emotion recognition has received increasing attention in recent years due to the tremendous amount of social media content available on the Web. Opinion mining, visual emotion analysis, and search and retrieval are among the application areas, to name a few. While existing works on the subject offer methods to detect image sentiment, i.e., recognizing the polarity of an image, fewer efforts focus on emotion analysis, i.e., recognizing the exact emotion aroused when a viewer is exposed to a given visual stimulus. The main gaps tackled in this work are (1) the lack of large-scale image datasets for deep learning of visual emotions and (2) the lack of context-sensitive, single-modality approaches to emotion analysis in the still image domain. In this paper, we introduce LUCFER (pronounced LU-CI-FER), a dataset containing over 3.6M images with three-dimensional labels: emotion, context, and valence. LUCFER, the largest dataset of its kind currently available, is collected using a novel data collection pipeline proposed and implemented in this work. Moreover, we train a context-sensitive deep classifier using a novel multinomial classification technique, proposed here, that adds a dimensionality reduction layer to the CNN. Relying on our categorical approach to emotion recognition, we claim and show empirically that injecting context into our unified training process helps (1) achieve a more balanced precision and recall and (2) boost performance, yielding an overall classification accuracy of 73.12%, compared to 58.3% achieved by the closest work in the literature.
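To make the architectural idea in the abstract concrete, below is a minimal, hedged sketch of a CNN with an added dimensionality-reduction layer before a multinomial (softmax) emotion head, plus an auxiliary context head as one plausible way of "injecting context" into a unified training process. The backbone (ResNet-50), layer sizes, class counts, and the auxiliary-loss formulation are illustrative assumptions, not the authors' actual implementation.

```python
# Hedged sketch (not the authors' code): CNN backbone -> dimensionality-
# reduction layer -> multinomial emotion classifier, with an auxiliary
# context head so context labels can contribute to training.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models


class ContextSensitiveEmotionNet(nn.Module):
    def __init__(self, num_emotions=6, num_contexts=10, reduced_dim=128):
        super().__init__()
        backbone = models.resnet50(weights=None)   # any CNN backbone would do
        feat_dim = backbone.fc.in_features         # 2048 for ResNet-50
        backbone.fc = nn.Identity()                # keep pooled conv features
        self.backbone = backbone
        # Dimensionality-reduction layer inserted before classification.
        self.reduce = nn.Sequential(nn.Linear(feat_dim, reduced_dim), nn.ReLU())
        self.emotion_head = nn.Linear(reduced_dim, num_emotions)
        self.context_head = nn.Linear(reduced_dim, num_contexts)

    def forward(self, images):
        z = self.reduce(self.backbone(images))
        return self.emotion_head(z), self.context_head(z)


def training_step(model, images, emotion_labels, context_labels, optimizer,
                  context_weight=0.5):
    """One unified training step: emotion loss plus a weighted context loss."""
    optimizer.zero_grad()
    emotion_logits, context_logits = model(images)
    loss = (F.cross_entropy(emotion_logits, emotion_labels)
            + context_weight * F.cross_entropy(context_logits, context_labels))
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, sharing the reduced feature space between the emotion and context heads is what lets the context labels shape the learned representation; the relative weight of the context loss is a free hyperparameter, not a value reported in the paper.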