Validating item response processes in digital competence assessment through eye-tracking techniques

Juan Bartolomé, P. Garaizar, Leire Bastida
{"title":"Validating item response processes in digital competence assessment through eye-tracking techniques","authors":"Juan Bartolomé, P. Garaizar, Leire Bastida","doi":"10.1145/3434780.3436641","DOIUrl":null,"url":null,"abstract":"This paper reports on an exploratory study with the aim to validate item response processes in digital competence assessment through eye-tracking techniques. When measuring complex cognitive constructs, it is crucial to correctly design the evaluation items to trigger the intended knowledge and skills. Furthermore, to assess the validity of a test requires considering not only the content of the evaluation tasks involved in the test, but also whether examinees respond to the tasks by engaging construct-relevant response processes. The eye tracking observations helped to fill an ‘explanatory gap’ by providing data on variation in item response processes that are not captured by other sources of process data such as think aloud protocols or computer-generated log files. We proposed a set of metrics that could help test designers to validate the different item formats used in the evaluation of digital competence. The gaze data provided detailed information on test item response strategies, enabling profiling of examinee engagement and response processes associated with successful performance. There were notable differences between the participants who correctly solved the tasks and those who failed, in terms of the time spent on solving them, as well as the data on their gazes. Moreover, this included insights into response processes which contributed to the validation of the assessment criteria of each item.","PeriodicalId":430095,"journal":{"name":"Eighth International Conference on Technological Ecosystems for Enhancing Multiculturality","volume":"109 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Eighth International Conference on Technological Ecosystems for Enhancing Multiculturality","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3434780.3436641","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

This paper reports on an exploratory study aimed at validating item response processes in digital competence assessment through eye-tracking techniques. When measuring complex cognitive constructs, it is crucial to design the evaluation items so that they trigger the intended knowledge and skills. Furthermore, assessing the validity of a test requires considering not only the content of the evaluation tasks involved in the test, but also whether examinees respond to the tasks by engaging construct-relevant response processes. The eye-tracking observations helped to fill an 'explanatory gap' by providing data on variation in item response processes that is not captured by other sources of process data, such as think-aloud protocols or computer-generated log files. We proposed a set of metrics that could help test designers validate the different item formats used in the evaluation of digital competence. The gaze data provided detailed information on test item response strategies, enabling profiling of examinee engagement and of the response processes associated with successful performance. There were notable differences between the participants who correctly solved the tasks and those who failed, both in the time spent solving them and in their gaze data. Moreover, these observations yielded insights into response processes that contributed to the validation of the assessment criteria for each item.
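The abstract does not spell out the proposed metric set, so the following is only a minimal sketch of the kind of area-of-interest (AOI) gaze metrics such an analysis typically relies on: dwell time, fixation count, and time to first fixation, plus a simple comparison of dwell time between examinees who solved an item correctly and those who did not. The data structure, field names, and metric choices are illustrative assumptions, not the authors' implementation.

```python
"""Hypothetical sketch of per-AOI gaze metrics for item response analysis."""

from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass
class Fixation:
    aoi: str            # area of interest hit by this fixation (e.g. "stem", "options")
    start_ms: float     # fixation onset, relative to item onset
    duration_ms: float  # fixation duration


def aoi_metrics(fixations: List[Fixation]) -> Dict[str, Dict[str, float]]:
    """Aggregate dwell time, fixation count, and time to first fixation per AOI."""
    metrics: Dict[str, Dict[str, float]] = {}
    for fix in fixations:
        m = metrics.setdefault(fix.aoi, {
            "dwell_ms": 0.0,
            "fixation_count": 0,
            "time_to_first_fixation_ms": fix.start_ms,
        })
        m["dwell_ms"] += fix.duration_ms
        m["fixation_count"] += 1
        # keep the earliest onset even if fixations arrive out of order
        m["time_to_first_fixation_ms"] = min(m["time_to_first_fixation_ms"], fix.start_ms)
    return metrics


def dwell_difference(correct: List[List[Fixation]],
                     incorrect: List[List[Fixation]],
                     aoi: str) -> Optional[float]:
    """Mean dwell-time difference (correct minus incorrect) on one AOI, in ms."""
    def mean_dwell(group: List[List[Fixation]]) -> Optional[float]:
        dwells = [aoi_metrics(trial).get(aoi, {}).get("dwell_ms", 0.0) for trial in group]
        return sum(dwells) / len(dwells) if dwells else None

    a, b = mean_dwell(correct), mean_dwell(incorrect)
    return None if a is None or b is None else a - b


if __name__ == "__main__":
    trial = [Fixation("stem", 0, 250), Fixation("options", 300, 180), Fixation("stem", 520, 400)]
    print(aoi_metrics(trial))
```

Comparing such per-AOI aggregates between successful and unsuccessful examinees is one plausible way to operationalize the group differences in solving time and gaze behavior described above.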
Latest articles in this journal:
- Design of an auricular prosthesis for patients with grade III or IV microtia
- Gamifying Teacher Students' Learning Platform: Information and Communication Technology in Teacher Education courses
- Learning Analytics: A Time to Shine
- Education for Sustainable Development and Climate Change: Pedagogical study of the social movement Fridays For Future Salamanca
- Validation of the K-Social-C questionnaire for measuring the Social Construction of Knowledge from Open Innovation in Social Innovation Laboratories: Instrument Validation