Human-centred design on crowdsourcing annotation towards improving active learning model performance

IF 1.8 | CAS Tier 4 (Management) | JCR Q3 (Computer Science, Information Systems) | Journal of Information Science | Pub Date: 2023-10-31 | DOI: 10.1177/01655515231204802
Jing Dong, Yangyang Kang, Jiawei Liu, Changlong Sun, Shu Fan, Huchong Jin, Dan Wu, Zhuoren Jiang, Xi Niu, Xiaozhong Liu
Citations: 0

Abstract

Active learning in machine learning is an effective approach to reducing the cost of human efforts for generating labels. The iterative process of active learning involves a human annotation step, during which crowdsourcing could be leveraged. It is essential for organisations adopting the active learning method to obtain a high model performance. This study aims to identify effective crowdsourcing interaction designs to promote the quality of human annotations and therefore the natural language processing (NLP)-based machine learning model performance. Specifically, the study experimented with four human-centred design techniques: highlight, guidelines, validation and text amount. Based on different combinations of the four design elements, the study developed 15 different annotation interfaces and recruited crowd workers to annotate texts with these interfaces. Annotated data under different designs were used separately to iteratively train a machine learning model. The results show that the design techniques of highlight and guideline play an essential role in improving the quality of human labels and therefore the performance of active learning models, while the impact of validation and text amount on model performance can be either positive in some cases or negative in other cases. The 'simple' designs (i.e. D1, D2, D7 and D14) with a few design techniques contribute to the top performance of models. The results provide practical implications to inspire the design of a crowdsourcing labelling system used for active learning.
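The iterative loop the abstract describes (train, select uncertain items, send them to human annotators, retrain) can be sketched as follows. This is a minimal illustration, not the paper's method: the nearest-centroid classifier, the margin-based confidence score, and the `oracle` function standing in for crowd annotators are all simplifying assumptions chosen to keep the example self-contained.

```python
import random

def train(labeled):
    """Fit a trivial nearest-centroid classifier from (feature, label) pairs."""
    sums, counts = {}, {}
    for x, y in labeled:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def confidence(model, x):
    """Margin between the two nearest class centroids: small = uncertain."""
    dists = sorted(abs(x - c) for c in model.values())
    return dists[1] - dists[0]

def active_learning_loop(pool, oracle, seed_labeled, rounds, batch):
    labeled = list(seed_labeled)
    unlabeled = list(pool)
    for _ in range(rounds):
        model = train(labeled)
        # Least-margin sampling: query the items the model is least sure about.
        unlabeled.sort(key=lambda x: confidence(model, x))
        queries, unlabeled = unlabeled[:batch], unlabeled[batch:]
        # The human annotation step: in the paper, this is where crowd
        # workers, guided by the interface design, supply the labels.
        labeled += [(x, oracle(x)) for x in queries]
    return train(labeled)

# Toy usage: two 1-D clusters around 0 and 10; `oracle` stands in for
# crowd annotators returning the true label.
def oracle(x):
    return int(x > 5)

random.seed(0)
pool = [random.uniform(0, 10) for _ in range(100)]
model = active_learning_loop(pool, oracle, [(0.0, 0), (10.0, 1)], rounds=3, batch=5)
```

Because queries concentrate near the decision boundary, the loop spends the annotation budget where labels are most informative, which is the cost-saving mechanism the abstract refers to.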
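The abstract reports 15 interfaces built from combinations of the four design elements. Notably, 15 equals the number of non-empty subsets of a four-element set; one plausible reading (an assumption on our part, not stated in the abstract) is that each interface enables a distinct non-empty subset of the techniques:

```python
from itertools import combinations

# Hypothetical mapping: each design enables one non-empty subset of the
# four techniques named in the abstract.
elements = ["highlight", "guidelines", "validation", "text amount"]
designs = [c for r in range(1, len(elements) + 1)
           for c in combinations(elements, r)]
print(len(designs))  # 15 non-empty subsets of 4 elements
```

Under this reading, the 'simple' designs the abstract singles out (D1, D2, D7, D14) would correspond to subsets with few enabled techniques, consistent with the finding that fewer techniques can yield the best model performance.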
Source journal: Journal of Information Science (Engineering & Technology - Computer Science: Information Systems)
CiteScore: 6.80
Self-citation rate: 8.30%
Articles per year: 121
Review time: 4 months
About the journal: The Journal of Information Science is a peer-reviewed international journal of high repute covering topics of interest to all those researching and working in the sciences of information and knowledge management. The Editors welcome material on any aspect of information science theory, policy, application or practice that will advance thinking in the field.