Sanghyub John Lee, JongYoon Lim, Leo Paas, Ho Seok Ahn
Neural Computing & Applications, vol. 35, no. 15, pp. 10945–10956 (2023). DOI: 10.1007/s00521-023-08276-8. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9879253/pdf/
Transformer transfer learning emotion detection model: synchronizing socially agreed and self-reported emotions in big data.
Tactics to determine the emotions of authors of texts, such as Twitter messages, often rely on multiple annotators who label relatively small data sets of text passages. An alternative method gathers large text databases that contain the authors' self-reported emotions, to which artificial intelligence, machine learning, and natural language processing tools can be applied. Both approaches have strengths and weaknesses. Emotions evaluated by a few human annotators are susceptible to idiosyncratic biases that reflect the characteristics of the annotators. But models based on large, self-reported emotion data sets may overlook subtle, social emotions that human annotators can recognize. In seeking to establish a means to train emotion detection models so that they can achieve good performance in different contexts, the current study proposes a novel transformer transfer learning approach that parallels human development stages: (1) detect emotions reported by the texts' authors and (2) synchronize the model with social emotions identified in annotator-rated emotion data sets. The analysis, based on a large, novel, self-reported emotion data set (n = 3,654,544) and applied to 10 previously published data sets, shows that the transfer learning emotion model achieves relatively strong performance.
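The two-stage procedure the abstract outlines, pretraining on large self-reported labels and then synchronizing on smaller annotator-rated labels, can be illustrated with a deliberately simplified sketch. The paper fine-tunes transformer language models; the pure-Python logistic classifier, synthetic one-dimensional "embeddings", and function names below are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: the paper's models are transformers, but the two-stage idea --
# (1) learn from a large self-reported-emotion corpus, (2) continue training
# ("synchronize") on a small annotator-rated set, reusing the stage-1 weights
# as the starting point -- can be shown with any gradient-trained classifier.
import math
import random

def train(samples, weights, lr=0.1, epochs=200):
    """Plain logistic-regression loop; starts from the supplied weights,
    so stage 2 can resume from the weights learned in stage 1."""
    w = list(weights)
    for _ in range(epochs):
        for x, y in samples:
            z = sum(wi * xi for wi, xi in zip(w, x))
            z = max(-60.0, min(60.0, z))       # guard math.exp overflow
            p = 1.0 / (1.0 + math.exp(-z))     # predicted P(label = 1)
            for i, xi in enumerate(x):
                w[i] += lr * (y - p) * xi      # log-likelihood gradient step
    return w

def predict(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

random.seed(0)
# Stage 1: large "self-reported" corpus. The scalar feature stands in for a
# text embedding; label 1 = positive emotion, 0 = negative (toy data).
self_reported = [([1.0, random.gauss(y, 0.5)], y) for y in (0, 1) for _ in range(200)]
w_stage1 = train(self_reported, weights=[0.0, 0.0])

# Stage 2: small annotator-rated set; initialize from the stage-1 weights
# instead of from scratch, i.e. transfer, then synchronize.
annotator_rated = [([1.0, random.gauss(y, 0.3)], y) for y in (0, 1) for _ in range(10)]
w_final = train(annotator_rated, weights=w_stage1, lr=0.05, epochs=50)

print(predict(w_final, [1.0, 0.9]))
```

The key design point mirrored here is that stage 2 does not discard what stage 1 learned: the small annotated set only nudges an already-trained model, which is what lets a few human-labeled examples inject socially agreed emotion signals without sacrificing the coverage gained from the big self-reported corpus.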
About the journal:
Neural Computing & Applications is an international journal which publishes original research and other information in the field of practical applications of neural computing and related techniques such as genetic algorithms, fuzzy logic and neuro-fuzzy systems.
All items relevant to building practical systems are within its scope, including but not limited to:
- adaptive computing
- algorithms
- applicable neural networks theory
- applied statistics
- architectures
- artificial intelligence
- benchmarks
- case histories of innovative applications
- fuzzy logic
- genetic algorithms
- hardware implementations
- hybrid intelligent systems
- intelligent agents
- intelligent control systems
- intelligent diagnostics
- intelligent forecasting
- machine learning
- neural networks
- neuro-fuzzy systems
- pattern recognition
- performance measures
- self-learning systems
- software simulations
- supervised and unsupervised learning methods
- system engineering and integration
Featured contributions fall into several categories: Original Articles, Review Articles, Book Reviews and Announcements.