Soraia M. Alarcão;Vânia Mendonça;Cláudia Sevivas;Carolina Maruta;Manuel J. Fonseca
IEEE Transactions on Affective Computing, vol. 15, no. 3, pp. 1213-1227. DOI: 10.1109/TAFFC.2023.3329563. Published 2023-11-02. JCR: Q1 (Computer Science, Artificial Intelligence). https://ieeexplore.ieee.org/document/10305266/
Annotate Smarter, not Harder: Using Active Learning to Reduce Emotional Annotation Effort
The success of supervised models for emotion recognition on images heavily depends on the availability of properly annotated images. Although millions of images are currently available, only a few are annotated with reliable emotional information. Current emotion recognition solutions either use large amounts of weakly labeled web images, which often contain noise unrelated to the image's emotions, or rely on transfer learning, which usually incurs performance losses. It would therefore be desirable to know which images are worth annotating, so that an extensive annotation effort can be avoided. In this paper, we propose a novel approach based on active learning to choose the most relevant images to annotate. Our approach dynamically combines multiple active learning strategies and learns which ones perform best, without prior knowledge of them. Experiments on nine benchmark datasets revealed that: (i) in classification tasks, active learning reduces the annotation effort while reaching or surpassing the performance of a supervised baseline with as little as 3% to 18% of the baseline's training set; (ii) our online combination of multiple strategies converges to the performance of the best individual strategies, while avoiding the experimentation overhead needed to identify them.
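To make the idea of dynamically combining query strategies concrete, the following is a minimal toy sketch, not the paper's actual method: an active-learning loop on a synthetic 1-D binary task that mixes two illustrative strategies (random sampling and uncertainty sampling near the decision boundary) with EXP3-style multiplicative weights, so that the better-performing strategy is gradually favored without knowing it in advance. All function names, the reward definition (gain in pool accuracy per query), and the toy classifier are assumptions for illustration only.

```python
import math
import random

random.seed(0)

# Toy 1-D binary task: the true label is 1 iff x > 0.5.
pool = [(random.random(),) for _ in range(200)]

def label(x):
    return int(x[0] > 0.5)

def fit(data):
    # Toy classifier: threshold at the midpoint between class extremes.
    pos = [x[0] for x, y in data if y == 1]
    neg = [x[0] for x, y in data if y == 0]
    if not pos or not neg:
        return 0.5  # fall back until both classes are seen
    return (min(pos) + max(neg)) / 2

def accuracy(th):
    return sum(int(x[0] > th) == label(x) for x in pool) / len(pool)

def query_random(candidates, th):
    return random.choice(candidates)

def query_uncertain(candidates, th):
    # Pick the unlabeled point closest to the current decision boundary.
    return min(candidates, key=lambda x: abs(x[0] - th))

strategies = [query_random, query_uncertain]
weights = [1.0] * len(strategies)   # EXP3-style weights over strategies
gamma = 0.1                         # exploration rate (assumed value)
labeled, unlabeled, threshold = [], list(pool), 0.0

for t in range(30):                 # annotation budget of 30 queries
    total = sum(weights)
    probs = [(1 - gamma) * w / total + gamma / len(weights) for w in weights]
    k = random.choices(range(len(strategies)), weights=probs)[0]
    x = strategies[k](unlabeled, threshold)
    unlabeled.remove(x)
    labeled.append((x, label(x)))   # "annotate" the chosen image
    before = accuracy(threshold)
    threshold = fit(labeled)
    reward = max(0.0, accuracy(threshold) - before)  # gain from this query
    # Importance-weighted multiplicative update (EXP3).
    weights[k] *= math.exp(gamma * reward / (probs[k] * len(strategies)))

print(round(accuracy(threshold), 2))
```

Under this setup the combined learner reaches high pool accuracy with only 30 of 200 points labeled, mirroring the abstract's claim that a small, well-chosen fraction of the data can match a fully supervised baseline; the weight update is one standard way to realize "learning the best strategies online", though the paper may use a different bandit formulation.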
Journal Introduction:
The IEEE Transactions on Affective Computing is an international and interdisciplinary journal. Its primary goal is to share research findings on the development of systems capable of recognizing, interpreting, and simulating human emotions and related affective phenomena. The journal publishes original research on the underlying principles and theories that explain how and why affective factors shape human-technology interactions. It also focuses on how techniques for sensing and simulating affect can enhance our understanding of human emotions and processes. Additionally, the journal explores the design, implementation, and evaluation of systems that prioritize the consideration of affect in their usability. We also welcome surveys of existing work that provide new perspectives on the historical and future directions of this field.