Facial Expression Recognition Using a Large Out-of-Context Dataset

Elizabeth Tran, Michael B. Mayhew, Hyojin Kim, P. Karande, A. Kaplan

2018 IEEE Winter Applications of Computer Vision Workshops (WACVW), published 2018-03-15
DOI: 10.1109/WACVW.2018.00012
Citations: 6
Abstract
We develop a method for emotion recognition from facial imagery. This problem is challenging in part because of the subjectivity of ground-truth labels and in part because of the relatively small size of existing labeled datasets. We use the FER+ dataset [8], which provides multiple emotion labels per image, to build an emotion recognition model that encompasses a full range of emotions. Since the amount of data in FER+ is limited, we explore the use of a much larger face dataset, MS-Celeb-1M [41], in conjunction with it. Specific layers within an Inception-ResNet-v1 [13, 38] model trained for face recognition are reused for the emotion recognition problem. We thus leverage the MS-Celeb-1M dataset in addition to FER+ and experiment with different architectures to assess the overall performance of neural networks at recognizing emotion from facial imagery.
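The abstract's point about multiple emotion labels per image can be made concrete: each FER+ image carries votes from several annotators, and one common way to use them (rather than collapsing to a single hard label) is to train against the normalized vote distribution with a soft-label cross-entropy. A minimal numpy sketch of that loss, with hypothetical vote counts over the eight FER+ emotion classes (the function name and example numbers are illustrative, not from the paper):

```python
import numpy as np

# The eight emotion classes used by FER+.
EMOTIONS = ["neutral", "happiness", "surprise", "sadness",
            "anger", "disgust", "fear", "contempt"]

def soft_label_cross_entropy(logits, votes):
    """Cross-entropy between the model's predicted distribution
    (softmax of logits) and the normalized annotator vote counts."""
    logits = np.asarray(logits, dtype=float)
    votes = np.asarray(votes, dtype=float)
    target = votes / votes.sum()          # votes -> probability distribution
    z = logits - logits.max()             # numerically stable softmax
    probs = np.exp(z) / np.exp(z).sum()
    return float(-np.sum(target * np.log(probs + 1e-12)))

# Hypothetical image: 7 of 10 annotators said "happiness", 3 "neutral".
votes = [3, 7, 0, 0, 0, 0, 0, 0]
agrees   = [0.0, 5.0, 0, 0, 0, 0, 0, 0]  # model favors happiness
disagrees = [5.0, 0.0, 0, 0, 0, 0, 0, 0]  # model favors neutral
assert soft_label_cross_entropy(agrees, votes) < soft_label_cross_entropy(disagrees, votes)
```

Training this way lets an image that annotators genuinely split on (e.g. neutral vs. happiness) contribute a softer target than a unanimous one, which is one way the subjectivity of the labels mentioned above can be absorbed by the loss rather than discarded.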