
2020 IEEE Workshop on Evaluation and Beyond - Methodological Approaches to Visualization (BELIV): Latest Publications

How to evaluate data visualizations across different levels of understanding
Alyxander Burns, Cindy Xiong, S. Franconeri, A. Cairo, Narges Mahyar
Understanding a visualization is a multi-level process. A reader must extract and extrapolate from numeric facts, understand how those facts apply to both the context of the data and other potential contexts, and draw or evaluate conclusions from the data. A well-designed visualization should support each of these levels of understanding. We diagnose levels of understanding of visualized data by adapting Bloom’s taxonomy, a common framework from the education literature. We describe each level of the framework and provide examples for how it can be applied to evaluate the efficacy of data visualizations along six levels of knowledge acquisition - knowledge, comprehension, application, analysis, synthesis, and evaluation. We present three case studies showing that this framework expands on existing methods to comprehensively measure how a visualization design facilitates a viewer’s understanding of visualizations. Although Bloom’s original taxonomy suggests a strong hierarchical structure for some domains, we found few examples of dependent relationships between performance at different levels for our three case studies. If this level-independence holds across new tested visualizations, the taxonomy could serve to inspire more targeted evaluations of levels of understanding that are relevant to a communication goal.
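The paper itself does not prescribe any code; the sketch below is only an illustration of how the six taxonomy levels named in the abstract could be turned into concrete evaluation prompts for a single chart. The chart topic (monthly unemployment rates), the question wording, and the function names are hypothetical assumptions, not the authors' instrument.

```python
# A minimal sketch (not the authors' instrument) of how the six Bloom's-taxonomy
# levels could be operationalized as evaluation prompts for one visualization,
# here a hypothetical line chart of monthly unemployment rates.

BLOOM_LEVELS = [
    "knowledge", "comprehension", "application",
    "analysis", "synthesis", "evaluation",
]

# Hypothetical question templates, one per level; a real study would tailor
# these to the visualization and its communication goal.
EXAMPLE_QUESTIONS = {
    "knowledge":     "What was the unemployment rate in March?",
    "comprehension": "Describe the overall trend shown in the chart.",
    "application":   "If the trend continues, what rate would you expect next quarter?",
    "analysis":      "Which months deviate most from the overall trend, and why might that be?",
    "synthesis":     "Combining this chart with the job-openings data, what story do they tell together?",
    "evaluation":    "Do you find the chart's conclusion convincing? Justify your answer.",
}

def build_protocol(levels=BLOOM_LEVELS, questions=EXAMPLE_QUESTIONS):
    """Return (level, question) pairs in taxonomy order for one study session."""
    return [(level, questions[level]) for level in levels]

if __name__ == "__main__":
    for level, question in build_protocol():
        print(f"[{level:>13}] {question}")
```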
Citations: 23
Data-First Visualization Design Studies
Michael Oppermann, T. Munzner
We introduce the notion of a data-first design study which is triggered by the acquisition of real-world data instead of specific stakeholder analysis questions. We propose an adaptation of the design study methodology framework to provide practical guidance and to aid transferability to other data-first design processes. We discuss opportunities and risks by reflecting on two of our own data-first design studies. We review 64 previous design studies and identify 16 of them as edge cases with characteristics that may indicate a data-first design process in action.
Citations: 16
Micro-entries: Encouraging Deeper Evaluation of Mental Models Over Time for Interactive Data Systems
Jeremy E. Block, E. Ragan
Many interactive data systems combine visual representations of data with embedded algorithmic support for automation and data exploration. To effectively support transparent and explainable data systems, it is important for researchers and designers to know how users understand the system. We discuss the evaluation of users’ mental models of system logic. Mental models are challenging to capture and analyze. While common evaluation methods aim to approximate the user’s final mental model after a period of system usage, user understanding continuously evolves as users interact with a system over time. In this paper, we review many common mental model measurement techniques, discuss tradeoffs, and recommend methods for deeper, more meaningful evaluation of mental models when using interactive data analysis and visualization systems. We present guidelines for evaluating mental models over time to help track the evolution of specific model updates and how they may map to the particular use of interface features and data queries. By asking users to describe what they know and how they know it, researchers can collect structured, time-ordered insight into a user’s conceptualization process while also helping guide users to their own discoveries.
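As an illustration only (the abstract describes an evaluation approach, not an implementation), the sketch below shows one way micro-entries could be stored as structured, time-ordered records of a user's evolving mental model; the class names, field names, and example interaction are assumptions.

```python
# A minimal sketch (assumed data model, not the paper's implementation) of
# capturing "micro-entries": time-ordered, structured snapshots of what a user
# believes about a system's logic and why, collected during interaction.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class MicroEntry:
    """One self-reported snapshot: what the user believes and why."""
    timestamp: datetime
    what_i_know: str        # the user's current belief about system logic
    how_i_know_it: str      # the interaction or evidence behind that belief
    features_used: List[str] = field(default_factory=list)  # interface features touched

@dataclass
class MentalModelLog:
    participant_id: str
    entries: List[MicroEntry] = field(default_factory=list)

    def record(self, what: str, how: str, features: List[str]) -> None:
        self.entries.append(MicroEntry(datetime.now(timezone.utc), what, how, features))

    def timeline(self) -> List[MicroEntry]:
        """Entries in chronological order, for tracking how the model evolves."""
        return sorted(self.entries, key=lambda e: e.timestamp)

# Example usage with a hypothetical participant:
log = MentalModelLog("P07")
log.record("The scatterplot ranks points by the slider value.",
           "Moving the slider reordered the highlighted points.",
           ["slider", "scatterplot"])
```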
Citations: 5
What Do We Actually Learn from Evaluations in the “Heroic Era” of Visualization? : Position Paper
M. Correll
We often point to the relative increase in the amount and sophistication of evaluations of visualization systems versus the earliest days of the field as evidence that we are maturing as a field. I am not so convinced. In particular, I feel that evaluations of visualizations, as they are ordinarily performed in the field or asked for by reviewers, fail to tell us very much that is useful or transferable about visualization systems, regardless of the statistical rigor or ecological validity of the evaluation. Through a series of thought experiments, I show how our current conceptions of visualization evaluations can be incomplete, capricious, or useless for the goal of furthering the field, more in line with the “heroic age” of medical science than the rigorous evidence-based field we might aspire to be. I conclude by suggesting that our models for designing evaluations, and our priorities as a field, should be revisited.
Citations: 5
Towards Identification and Mitigation of Task-Based Challenges in Comparative Visualization Studies
Aditeya Pandey, Uzma Haque Syeda, M. Borkin
The effectiveness of a visualization technique is dependent on how well it supports the tasks or goals of an end-user. To measure the effectiveness of a visualization technique, researchers often use a comparative study design. In a comparative study, two or more visualization techniques are compared over a set of tasks, commonly measuring human performance in terms of task accuracy and completion time. Despite the critical role of tasks in comparative studies, the current lack of guidance in existing literature on best practices for task selection and communication of research results in evaluation studies is problematic. In this work, we systematically identify and curate the task-based challenges of comparative studies by reviewing existing visualization literature on the topic. Furthermore, for each of the presented challenges we discuss the potential threats to validity for a comparative study. The challenges discussed in this paper are further backed by evidence identified in a detailed survey of comparative tree visualization studies. Finally, we recommend best practices from personal experience and the surveyed tree visualization studies to provide guidelines for other researchers to mitigate the challenges. The survey data and a free copy of the paper are available at https://osf.io/g3btk/
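The sketch below only illustrates the core measurements the abstract mentions, per-condition task accuracy and completion time in a comparative study; the trial data, technique names, and field names are hypothetical and not taken from the paper or its survey.

```python
# A minimal sketch (hypothetical data, not the authors' code) of aggregating the
# two measures most comparative visualization studies report: mean task accuracy
# and mean completion time per (technique, task) condition.

from collections import defaultdict
from statistics import mean

# Each trial: (technique, task, answered correctly?, completion time in seconds)
trials = [
    ("treemap", "find_max", True,  7.2),
    ("treemap", "find_max", False, 9.8),
    ("icicle",  "find_max", True,  6.1),
    ("icicle",  "find_max", True,  5.4),
    ("treemap", "compare",  True, 12.3),
    ("icicle",  "compare",  False, 14.0),
]

def summarize(trials):
    """Mean accuracy and completion time per (technique, task) condition."""
    groups = defaultdict(list)
    for technique, task, correct, seconds in trials:
        groups[(technique, task)].append((correct, seconds))
    return {
        condition: {
            "accuracy": mean(1.0 if c else 0.0 for c, _ in obs),
            "mean_time_s": mean(t for _, t in obs),
            "n": len(obs),
        }
        for condition, obs in groups.items()
    }

for condition, stats in summarize(trials).items():
    print(condition, stats)
```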
Citations: 3