Annotate Smarter, not Harder: Using Active Learning to Reduce Emotional Annotation Effort

IF 9.6 · CAS Tier 2 (Computer Science) · JCR Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · IEEE Transactions on Affective Computing, vol. 15, no. 3, pp. 1213-1227 · Pub Date: 2023-11-02 · DOI: 10.1109/TAFFC.2023.3329563 · https://ieeexplore.ieee.org/document/10305266/
Soraia M. Alarcão;Vânia Mendonça;Cláudia Sevivas;Carolina Maruta;Manuel J. Fonseca
{"title":"更聪明地注释,而不是更努力地注释:利用主动学习减少情感注释工作","authors":"Soraia M. Alarcão;Vânia Mendonça;Cláudia Sevivas;Carolina Maruta;Manuel J. Fonseca","doi":"10.1109/TAFFC.2023.3329563","DOIUrl":null,"url":null,"abstract":"The success of supervised models for emotion recognition on images heavily depends on the availability of images properly annotated. Although millions of images are presently available, only a few are annotated with reliable emotional information. Current emotion recognition solutions either use large amounts of weakly-labeled web images, which often contain noise that is unrelated to the emotions of the image, or transfer learning, which usually results in performance losses. Thus, it would be desirable to know which images would be useful to be annotated to avoid an extensive annotation effort. In this paper, we propose a novel approach based on active learning to choose which images are more relevant to be annotated. Our approach dynamically combines multiple active learning strategies and learns the best ones (without prior knowledge of the best ones). Experiments using nine benchmark datasets revealed that: (i) active learning allows to reduce the annotation effort, while reaching or surpassing the performance of a supervised baseline with as little as 3% to 18% of the baseline's training set, in classification tasks; (ii) our online combination of multiple strategies converges to the performance of the best individual strategies, while avoiding the experimentation overhead needed to identify them.","PeriodicalId":13131,"journal":{"name":"IEEE Transactions on Affective Computing","volume":"15 3","pages":"1213-1227"},"PeriodicalIF":9.6000,"publicationDate":"2023-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Annotate Smarter, not Harder: Using Active Learning to Reduce Emotional Annotation Effort\",\"authors\":\"Soraia M. Alarcão;Vânia Mendonça;Cláudia Sevivas;Carolina Maruta;Manuel J. Fonseca\",\"doi\":\"10.1109/TAFFC.2023.3329563\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The success of supervised models for emotion recognition on images heavily depends on the availability of images properly annotated. Although millions of images are presently available, only a few are annotated with reliable emotional information. Current emotion recognition solutions either use large amounts of weakly-labeled web images, which often contain noise that is unrelated to the emotions of the image, or transfer learning, which usually results in performance losses. Thus, it would be desirable to know which images would be useful to be annotated to avoid an extensive annotation effort. In this paper, we propose a novel approach based on active learning to choose which images are more relevant to be annotated. Our approach dynamically combines multiple active learning strategies and learns the best ones (without prior knowledge of the best ones). 
Experiments using nine benchmark datasets revealed that: (i) active learning allows to reduce the annotation effort, while reaching or surpassing the performance of a supervised baseline with as little as 3% to 18% of the baseline's training set, in classification tasks; (ii) our online combination of multiple strategies converges to the performance of the best individual strategies, while avoiding the experimentation overhead needed to identify them.\",\"PeriodicalId\":13131,\"journal\":{\"name\":\"IEEE Transactions on Affective Computing\",\"volume\":\"15 3\",\"pages\":\"1213-1227\"},\"PeriodicalIF\":9.6000,\"publicationDate\":\"2023-11-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Affective Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10305266/\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Affective Computing","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10305266/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

The success of supervised models for emotion recognition on images heavily depends on the availability of properly annotated images. Although millions of images are presently available, only a few are annotated with reliable emotional information. Current emotion recognition solutions either use large amounts of weakly labeled web images, which often contain noise unrelated to the emotions in the image, or transfer learning, which usually results in performance losses. It would therefore be desirable to know which images are most useful to annotate, to avoid an extensive annotation effort. In this paper, we propose a novel approach based on active learning to choose the most relevant images to annotate. Our approach dynamically combines multiple active learning strategies and learns which ones perform best, without prior knowledge of them. Experiments on nine benchmark datasets revealed that: (i) in classification tasks, active learning reduces the annotation effort while reaching or surpassing the performance of a supervised baseline with as little as 3% to 18% of the baseline's training set; (ii) our online combination of multiple strategies converges to the performance of the best individual strategies, while avoiding the experimentation overhead needed to identify them.
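
The abstract describes, at a high level, an online combination of multiple active learning strategies that converges to the best individual one without knowing it in advance. The abstract does not spell out the mechanism, but a common way to realize this behavior is an adversarial multi-armed bandit such as EXP3 placed over classic query strategies (least-confidence, margin, entropy). The Python sketch below is a minimal illustration under those assumptions; the strategy set, the Exp3Combiner class, and the reward protocol are illustrative choices, not the authors' implementation.

import numpy as np

def least_confidence(probs):
    # Higher score when the model's top prediction is less confident.
    return 1.0 - probs.max(axis=1)

def margin(probs):
    # Small gap between the top-2 class probabilities => more informative.
    top2 = np.sort(probs, axis=1)[:, -2:]
    return -(top2[:, 1] - top2[:, 0])

def entropy(probs):
    # Predictive entropy over all classes.
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)

STRATEGIES = [least_confidence, margin, entropy]

class Exp3Combiner:
    # EXP3-style weighting over query strategies (illustrative, not the
    # paper's exact update rule).
    def __init__(self, n_arms, gamma=0.1):
        self.gamma = gamma
        self.weights = np.ones(n_arms)

    def _probs(self):
        w = self.weights / self.weights.sum()
        return (1.0 - self.gamma) * w + self.gamma / len(self.weights)

    def select(self, rng):
        # Sample one strategy according to the current mixture.
        self.p = self._probs()
        self.arm = rng.choice(len(self.weights), p=self.p)
        return self.arm

    def update(self, reward):
        # Importance-weighted update for the strategy that was played;
        # reward in [0, 1], e.g., a normalized validation-accuracy gain.
        est = reward / self.p[self.arm]
        self.weights[self.arm] *= np.exp(self.gamma * est / len(self.weights))

def query_batch(model, pool_X, combiner, rng, batch_size=16):
    # One active learning round: pick a strategy via the bandit, score the
    # unlabeled pool, and return indices of the most informative images.
    # `model` is any classifier exposing predict_proba (scikit-learn style).
    arm = combiner.select(rng)
    scores = STRATEGIES[arm](model.predict_proba(pool_X))
    return np.argsort(scores)[-batch_size:]

In use, the caller would annotate the queried batch, retrain the model, compute a reward such as the change in validation accuracy, and feed it back via combiner.update(reward); over rounds the bandit shifts probability mass toward whichever strategy produces the most useful queries, which mirrors the convergence behavior the abstract claims for the online combination.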
Source journal
IEEE Transactions on Affective Computing
Categories: COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; COMPUTER SCIENCE, CYBERNETICS
CiteScore: 15.00
Self-citation rate: 6.20%
Annual publications: 174
Journal description: The IEEE Transactions on Affective Computing is an international and interdisciplinary journal. Its primary goal is to share research findings on the development of systems capable of recognizing, interpreting, and simulating human emotions and related affective phenomena. The journal publishes original research on the underlying principles and theories that explain how and why affective factors shape human-technology interactions. It also focuses on how techniques for sensing and simulating affect can enhance our understanding of human emotions and processes. Additionally, the journal explores the design, implementation, and evaluation of systems that prioritize the consideration of affect in their usability. We also welcome surveys of existing work that provide new perspectives on the historical and future directions of this field.
Latest articles in this journal
The ForDigitStress Dataset: A Multi-Modal Dataset for Automatic Stress Recognition
Individual-Aware Attention Modulation for Unseen Speaker Emotion Recognition
Sparse Emotion Dictionary and CWT Spectrogram Fusion with Multi-head Self-Attention for Depression Recognition in Parkinson's Disease Patients
A Low-Rank Matching Attention Based Cross-Modal Feature Fusion Method for Conversational Emotion Recognition
EEG-Based Cross-Subject Emotion Recognition Using Sparse Bayesian Learning with Enhanced Covariance Alignment