Testing the Test: Observations When Assessing Visualization Literacy of Domain Experts

Seyda Öney, Moataz Abdelaal, Kuno Kurzhals, Paul Betz, Cordula Kropp, Daniel Weiskopf
{"title":"Testing the Test: Observations When Assessing Visualization Literacy of Domain Experts","authors":"Seyda Öney, Moataz Abdelaal, Kuno Kurzhals, Paul Betz, Cordula Kropp, Daniel Weiskopf","doi":"arxiv-2409.08101","DOIUrl":null,"url":null,"abstract":"Various standardized tests exist that assess individuals' visualization\nliteracy. Their use can help to draw conclusions from studies. However, it is\nnot taken into account that the test itself can create a pressure situation\nwhere participants might fear being exposed and assessed negatively. This is\nespecially problematic when testing domain experts in design studies. We\nconducted interviews with experts from different domains performing the\nMini-VLAT test for visualization literacy to identify potential problems. Our\nparticipants reported that the time limit per question, ambiguities in the\nquestions and visualizations, and missing steps in the test procedure mainly\nhad an impact on their performance and content. We discuss possible changes to\nthe test design to address these issues and how such assessment methods could\nbe integrated into existing evaluation procedures.","PeriodicalId":501541,"journal":{"name":"arXiv - CS - Human-Computer Interaction","volume":"36 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Human-Computer Interaction","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.08101","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Various standardized tests exist that assess individuals' visualization literacy. Their use can help to draw conclusions from studies. However, it is not taken into account that the test itself can create a pressure situation where participants might fear being exposed and assessed negatively. This is especially problematic when testing domain experts in design studies. We conducted interviews with experts from different domains performing the Mini-VLAT test for visualization literacy to identify potential problems. Our participants reported that the time limit per question, ambiguities in the questions and visualizations, and missing steps in the test procedure mainly had an impact on their performance and content. We discuss possible changes to the test design to address these issues and how such assessment methods could be integrated into existing evaluation procedures.