Measurement practices in user experience (UX) research: a systematic quantitative literature review

Frontiers in Computer Science | IF 2.4 | Q3, Computer Science, Interdisciplinary Applications | Pub Date: 2024-03-04 | DOI: 10.3389/fcomp.2024.1368860
S. Perrig, L. Aeschbach, Nicolas Scharowski, Nick von Felten, Klaus Opwis, Florian Brühlmann
{"title":"用户体验(UX)研究中的测量实践:系统性定量文献综述","authors":"S. Perrig, L. Aeschbach, Nicolas Scharowski, Nick von Felten, Klaus Opwis, Florian Brühlmann","doi":"10.3389/fcomp.2024.1368860","DOIUrl":null,"url":null,"abstract":"User experience (UX) research relies heavily on survey scales to measure users' subjective experiences with technology. However, repeatedly raised concerns regarding the improper use of survey scales in UX research and adjacent fields call for a systematic review of current measurement practices. Therefore, we conducted a systematic literature review, screening 153 papers from four years of the ACM Conference on Human Factors in Computing Systems proceedings (ACM CHI 2019 to 2022), of which 60 were eligible empirical studies using survey scales to study users' experiences. We identified 85 different scales and 172 distinct constructs measured. Most scales were used once (70.59%), and most constructs were measured only once (66.28%). The System Usability Scale was the most popular scale, followed by the User Experience Questionnaire, and the NASA Task Load Index. Regarding constructs, usability was the most frequently measured, followed by attractiveness, effort, and presence. Furthermore, results show that papers rarely contained complete rationales for scale selection (20.00%) and seldom provided all scale items used (30.00%). More than a third of all scales were adapted (34.19%), while only one-third of papers reported any scale quality investigation (36.67%). On the basis of our results, we highlight questionable measurement practices in UX research and suggest opportunities to improve scale use for UX-related constructs. Additionally, we provide six recommended steps to promote enhanced rigor in following best practices for scale-based UX research.","PeriodicalId":52823,"journal":{"name":"Frontiers in Computer Science","volume":null,"pages":null},"PeriodicalIF":2.4000,"publicationDate":"2024-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Measurement practices in user experience (UX) research: a systematic quantitative literature review\",\"authors\":\"S. Perrig, L. Aeschbach, Nicolas Scharowski, Nick von Felten, Klaus Opwis, Florian Brühlmann\",\"doi\":\"10.3389/fcomp.2024.1368860\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"User experience (UX) research relies heavily on survey scales to measure users' subjective experiences with technology. However, repeatedly raised concerns regarding the improper use of survey scales in UX research and adjacent fields call for a systematic review of current measurement practices. Therefore, we conducted a systematic literature review, screening 153 papers from four years of the ACM Conference on Human Factors in Computing Systems proceedings (ACM CHI 2019 to 2022), of which 60 were eligible empirical studies using survey scales to study users' experiences. We identified 85 different scales and 172 distinct constructs measured. Most scales were used once (70.59%), and most constructs were measured only once (66.28%). The System Usability Scale was the most popular scale, followed by the User Experience Questionnaire, and the NASA Task Load Index. Regarding constructs, usability was the most frequently measured, followed by attractiveness, effort, and presence. Furthermore, results show that papers rarely contained complete rationales for scale selection (20.00%) and seldom provided all scale items used (30.00%). 
More than a third of all scales were adapted (34.19%), while only one-third of papers reported any scale quality investigation (36.67%). On the basis of our results, we highlight questionable measurement practices in UX research and suggest opportunities to improve scale use for UX-related constructs. Additionally, we provide six recommended steps to promote enhanced rigor in following best practices for scale-based UX research.\",\"PeriodicalId\":52823,\"journal\":{\"name\":\"Frontiers in Computer Science\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.4000,\"publicationDate\":\"2024-03-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Frontiers in Computer Science\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.3389/fcomp.2024.1368860\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in Computer Science","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3389/fcomp.2024.1368860","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 0

Abstract

User experience (UX) research relies heavily on survey scales to measure users' subjective experiences with technology. However, repeatedly raised concerns regarding the improper use of survey scales in UX research and adjacent fields call for a systematic review of current measurement practices. Therefore, we conducted a systematic literature review, screening 153 papers from four years of the ACM Conference on Human Factors in Computing Systems proceedings (ACM CHI 2019 to 2022), of which 60 were eligible empirical studies using survey scales to study users' experiences. We identified 85 different scales and 172 distinct constructs measured. Most scales were used once (70.59%), and most constructs were measured only once (66.28%). The System Usability Scale was the most popular scale, followed by the User Experience Questionnaire, and the NASA Task Load Index. Regarding constructs, usability was the most frequently measured, followed by attractiveness, effort, and presence. Furthermore, results show that papers rarely contained complete rationales for scale selection (20.00%) and seldom provided all scale items used (30.00%). More than a third of all scales were adapted (34.19%), while only one-third of papers reported any scale quality investigation (36.67%). On the basis of our results, we highlight questionable measurement practices in UX research and suggest opportunities to improve scale use for UX-related constructs. Additionally, we provide six recommended steps to promote enhanced rigor in following best practices for scale-based UX research.
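The abstract reports its findings as percentages of three different denominators (85 scales, 172 constructs, and 60 eligible papers). The short Python sketch below is an illustrative calculation, not part of the original review; it converts the reported percentages back into approximate absolute counts, assuming each percentage refers to the denominator named alongside it in the abstract.

```python
# Illustrative check: map the reported percentages back to approximate counts.
# Assumption: each percentage is taken over the denominator named with it
# in the abstract (85 scales, 172 constructs, or 60 eligible papers).
counts = {
    "scales identified": 85,
    "constructs measured": 172,
    "eligible papers": 60,
}

findings = [
    ("scales used only once", 70.59, counts["scales identified"]),
    ("constructs measured only once", 66.28, counts["constructs measured"]),
    ("papers with a complete rationale for scale selection", 20.00, counts["eligible papers"]),
    ("papers providing all scale items used", 30.00, counts["eligible papers"]),
    ("papers reporting any scale quality investigation", 36.67, counts["eligible papers"]),
]

for label, pct, denom in findings:
    approx = round(denom * pct / 100)  # e.g. 70.59% of 85 scales ≈ 60 scales
    print(f"{label}: {pct:.2f}% of {denom} ≈ {approx}")
```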
Source journal: Frontiers in Computer Science (Computer Science, Interdisciplinary Applications)
CiteScore: 4.30
Self-citation rate: 0.00%
Articles published: 152
Review time: 13 weeks
Latest articles in this journal
A Support Vector Machine based approach for plagiarism detection in Python code submissions in undergraduate settings
Working with agile and crowd: human factors identified from the industry
Energy-efficient, low-latency, and non-contact eye blink detection with capacitive sensing
Experimenting with D-Wave quantum annealers on prime factorization problems
Fuzzy Markov model for the reliability analysis of hybrid microgrids