How to evaluate data visualizations across different levels of understanding
Pub Date: 2020-09-03 | DOI: 10.1109/BELIV51497.2020.00010
Alyxander Burns, Cindy Xiong, S. Franconeri, A. Cairo, Narges Mahyar
Understanding a visualization is a multi-level process. A reader must extract and extrapolate from numeric facts, understand how those facts apply to both the context of the data and other potential contexts, and draw or evaluate conclusions from the data. A well-designed visualization should support each of these levels of understanding. We diagnose levels of understanding of visualized data by adapting Bloom’s taxonomy, a common framework from the education literature. We describe each level of the framework and provide examples of how it can be applied to evaluate the efficacy of data visualizations along six levels of knowledge acquisition: knowledge, comprehension, application, analysis, synthesis, and evaluation. We present three case studies showing that this framework expands on existing methods to comprehensively measure how a visualization design facilitates a viewer’s understanding. Although Bloom’s original taxonomy suggests a strong hierarchical structure for some domains, we found few examples of dependent relationships between performance at different levels in our three case studies. If this level-independence holds across newly tested visualizations, the taxonomy could serve to inspire more targeted evaluations of the levels of understanding that are relevant to a communication goal.
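For readers who want to see what an evaluation along these six levels could look like in practice, here is a minimal sketch of tagging questions by level and reporting per-level scores. It is not taken from the paper; the question texts, field names, and scoring scheme are illustrative assumptions.

```python
# Minimal sketch (not from the paper): tag evaluation questions with the Bloom
# level they probe, then report accuracy per level instead of a single score.
# Question texts, field names, and the scoring scheme are illustrative assumptions.
from collections import defaultdict

BLOOM_LEVELS = ["knowledge", "comprehension", "application",
                "analysis", "synthesis", "evaluation"]

questions = [
    {"id": "q1", "level": "knowledge",     "text": "What was the unemployment rate in 2019?"},
    {"id": "q2", "level": "comprehension", "text": "Summarize the overall trend in one sentence."},
    {"id": "q3", "level": "application",   "text": "Given the trend, what value would you expect next year?"},
    {"id": "q4", "level": "evaluation",    "text": "Is the headline claim supported by the chart?"},
]

def per_level_accuracy(responses):
    """responses: list of dicts like {"id": "q1", "correct": True}."""
    level_of = {q["id"]: q["level"] for q in questions}
    totals, hits = defaultdict(int), defaultdict(int)
    for r in responses:
        level = level_of[r["id"]]
        totals[level] += 1
        hits[level] += int(r["correct"])
    # Only report levels that were actually probed.
    return {lvl: hits[lvl] / totals[lvl] for lvl in BLOOM_LEVELS if totals[lvl]}

if __name__ == "__main__":
    demo = [{"id": "q1", "correct": True}, {"id": "q2", "correct": True},
            {"id": "q3", "correct": False}, {"id": "q4", "correct": True}]
    print(per_level_accuracy(demo))
```

Keeping per-level results separate, rather than collapsing them into one accuracy figure, is one way to support the kind of targeted evaluations the abstract describes.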
{"title":"How to evaluate data visualizations across different levels of understanding","authors":"Alyxander Burns, Cindy Xiong, S. Franconeri, A. Cairo, Narges Mahyar","doi":"10.1109/BELIV51497.2020.00010","DOIUrl":"https://doi.org/10.1109/BELIV51497.2020.00010","url":null,"abstract":"Understanding a visualization is a multi-level process. A reader must extract and extrapolate from numeric facts, understand how those facts apply to both the context of the data and other potential contexts, and draw or evaluate conclusions from the data. A well-designed visualization should support each of these levels of understanding. We diagnose levels of understanding of visualized data by adapting Bloom’s taxonomy, a common framework from the education literature. We describe each level of the framework and provide examples for how it can be applied to evaluate the efficacy of data visualizations along six levels of knowledge acquisition - knowledge, comprehension, application, analysis, synthesis, and evaluation. We present three case studies showing that this framework expands on existing methods to comprehensively measure how a visualization design facilitates a viewer’s understanding of visualizations. Although Bloom’s original taxonomy suggests a strong hierarchical structure for some domains, we found few examples of dependent relationships between performance at different levels for our three case studies. If this level-independence holds across new tested visualizations, the taxonomy could serve to inspire more targeted evaluations of levels of understanding that are relevant to a communication goal.","PeriodicalId":282674,"journal":{"name":"2020 IEEE Workshop on Evaluation and Beyond - Methodological Approaches to Visualization (BELIV)","volume":"177 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124392897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data-First Visualization Design Studies
Pub Date: 2020-09-03 | DOI: 10.1109/BELIV51497.2020.00016
Michael Oppermann, T. Munzner
We introduce the notion of a data-first design study, which is triggered by the acquisition of real-world data rather than by specific stakeholder analysis questions. We propose an adaptation of the design study methodology framework to provide practical guidance and to aid transferability to other data-first design processes. We discuss opportunities and risks by reflecting on two of our own data-first design studies. We review 64 previous design studies and identify 16 of them as edge cases with characteristics that may indicate a data-first design process in action.
{"title":"Data-First Visualization Design Studies","authors":"Michael Oppermann, T. Munzner","doi":"10.1109/BELIV51497.2020.00016","DOIUrl":"https://doi.org/10.1109/BELIV51497.2020.00016","url":null,"abstract":"We introduce the notion of a data-first design study which is triggered by the acquisition of real-world data instead of specific stakeholder analysis questions. We propose an adaptation of the design study methodology framework to provide practical guidance and to aid transferability to other data-first design processes. We discuss opportunities and risks by reflecting on two of our own data-first design studies. We review 64 previous design studies and identify 16 of them as edge cases with characteristics that may indicate a data-first design process in action.","PeriodicalId":282674,"journal":{"name":"2020 IEEE Workshop on Evaluation and Beyond - Methodological Approaches to Visualization (BELIV)","volume":"204 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131802515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Micro-entries: Encouraging Deeper Evaluation of Mental Models Over Time for Interactive Data Systems
Pub Date: 2020-09-02 | DOI: 10.1109/BELIV51497.2020.00012
Jeremy E. Block, E. Ragan
Many interactive data systems combine visual representations of data with embedded algorithmic support for automation and data exploration. To effectively support transparent and explainable data systems, it is important for researchers and designers to know how users understand the system. We discuss the evaluation of users’ mental models of system logic. Mental models are challenging to capture and analyze. While common evaluation methods aim to approximate the user’s final mental model after a period of system usage, user understanding continuously evolves as users interact with a system over time. In this paper, we review many common mental model measurement techniques, discuss tradeoffs, and recommend methods for deeper, more meaningful evaluation of mental models when using interactive data analysis and visualization systems. We present guidelines for evaluating mental models over time to help track the evolution of specific model updates and how they may map to the particular use of interface features and data queries. By asking users to describe what they know and how they know it, researchers can collect structured, time-ordered insight into a user’s conceptualization process while also helping guide users to their own discoveries.
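As one concrete, purely illustrative reading of "structured, time-ordered insight", the sketch below logs timestamped micro-entries that pair what a user currently believes with the evidence they cite. The class and field names are assumptions, not the authors' tooling.

```python
# Minimal sketch (not the authors' tooling): collect timestamped "micro-entries"
# pairing what a user currently believes about the system with the evidence they
# cite, so the evolution of the mental model can be traced later. Field names
# and prompts are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class MicroEntry:
    belief: str    # "what I know": the user's current understanding
    evidence: str  # "how I know it": the interaction or cue they cite
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class MentalModelLog:
    participant_id: str
    entries: List[MicroEntry] = field(default_factory=list)

    def record(self, belief: str, evidence: str) -> None:
        self.entries.append(MicroEntry(belief, evidence))

    def timeline(self) -> List[str]:
        """Time-ordered view of the entries, ready for qualitative coding."""
        return [f"{e.timestamp.isoformat()} | {e.belief} | because: {e.evidence}"
                for e in sorted(self.entries, key=lambda e: e.timestamp)]

# Usage: prompt the participant after each analysis step, then export the timeline.
log = MentalModelLog("P07")
log.record("The system groups points by color similarity",
           "re-running with a different palette changed the clusters")
print("\n".join(log.timeline()))
```

Because each entry is timestamped, specific model updates could later be aligned with logs of which interface features were used and which data queries were issued, as the abstract suggests.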
{"title":"Micro-entries: Encouraging Deeper Evaluation of Mental Models Over Time for Interactive Data Systems","authors":"Jeremy E. Block, E. Ragan","doi":"10.1109/BELIV51497.2020.00012","DOIUrl":"https://doi.org/10.1109/BELIV51497.2020.00012","url":null,"abstract":"Many interactive data systems combine visual representations of data with embedded algorithmic support for automation and data exploration. To effectively support transparent and explainable data systems, it is important for researchers and designers to know how users understand the system. We discuss the evaluation of users’ mental models of system logic. Mental models are challenging to capture and analyze. While common evaluation methods aim to approximate the user’s final mental model after a period of system usage, user understanding continuously evolves as users interact with a system over time. In this paper, we review many common mental model measurement techniques, discuss tradeoffs, and recommend methods for deeper, more meaningful evaluation of mental models when using interactive data analysis and visualization systems. We present guidelines for evaluating mental models over time to help track the evolution of specific model updates and how they may map to the particular use of interface features and data queries. By asking users to describe what they know and how they know it, researchers can collect structured, time-ordered insight into a user’s conceptualization process while also helping guide users to their own discoveries.","PeriodicalId":282674,"journal":{"name":"2020 IEEE Workshop on Evaluation and Beyond - Methodological Approaches to Visualization (BELIV)","volume":"107 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116664916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
What Do We Actually Learn from Evaluations in the “Heroic Era” of Visualization? Position Paper
Pub Date: 2020-08-25 | DOI: 10.1109/BELIV51497.2020.00013
M. Correll
We often point to the relative increase in the amount and sophistication of evaluations of visualization systems versus the earliest days of the field as evidence that we are maturing as a field. I am not so convinced. In particular, I feel that evaluations of visualizations, as they are ordinarily performed in the field or asked for by reviewers, fail to tell us very much that is useful or transferable about visualization systems, regardless of the statistical rigor or ecological validity of the evaluation. Through a series of thought experiments, I show how our current conceptions of visualization evaluations can be incomplete, capricious, or useless for the goal of furthering the field, more in line with the “heroic age” of medical science than the rigorous evidence-based field we might aspire to be. I conclude by suggesting that our models for designing evaluations, and our priorities as a field, should be revisited.
{"title":"What Do We Actually Learn from Evaluations in the “Heroic Era” of Visualization? : Position Paper","authors":"M. Correll","doi":"10.1109/BELIV51497.2020.00013","DOIUrl":"https://doi.org/10.1109/BELIV51497.2020.00013","url":null,"abstract":"We often point to the relative increase in the amount and sophistication of evaluations of visualization systems versus the earliest days of the field as evidence that we are maturing as a field. I am not so convinced. In particular, I feel that evaluations of visualizations, as they are ordinarily performed in the field or asked for by reviewers, fail to tell us very much that is useful or transferable about visualization systems, regardless of the statistical rigor or ecological validity of the evaluation. Through a series of thought experiments, I show how our current conceptions of visualization evaluations can be incomplete, capricious, or useless for the goal of furthering the field, more in line with the “heroic age” of medical science than the rigorous evidence-based field we might aspire to be. I conclude by suggesting that our models for designing evaluations, and our priorities as a field, should be revisited.","PeriodicalId":282674,"journal":{"name":"2020 IEEE Workshop on Evaluation and Beyond - Methodological Approaches to Visualization (BELIV)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132545603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards Identification and Mitigation of Task-Based Challenges in Comparative Visualization Studies
Pub Date: 2020-08-19 | DOI: 10.1109/BELIV51497.2020.00014
Aditeya Pandey, Uzma Haque Syeda, M. Borkin
The effectiveness of a visualization technique depends on how well it supports the tasks or goals of an end-user. To measure the effectiveness of a visualization technique, researchers often use a comparative study design. In a comparative study, two or more visualization techniques are compared over a set of tasks, and human performance is commonly measured in terms of task accuracy and completion time. Despite the critical role of tasks in comparative studies, the existing literature offers little guidance on best practices for task selection and for communicating research results in evaluation studies, which is problematic. In this work, we systematically identify and curate the task-based challenges of comparative studies by reviewing the existing visualization literature on the topic. Furthermore, for each of the presented challenges, we discuss the potential threats to validity for a comparative study. The challenges discussed in this paper are further backed by evidence identified in a detailed survey of comparative tree visualization studies. Finally, we recommend best practices from personal experience and the surveyed tree visualization studies to provide guidelines for other researchers to mitigate the challenges. The survey data and a free copy of the paper are available at https://osf.io/g3btk/
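To make the accuracy-and-completion-time framing concrete, here is a minimal sketch of summarizing raw trials from a two-technique comparison. The technique names, task labels, and toy trial data are illustrative assumptions, not the paper's survey data.

```python
# Minimal sketch (not from the paper): summarize raw comparative-study trials
# into mean accuracy and completion time per (technique, task) pair.
# Technique names, task labels, and the toy trials are illustrative assumptions.
from collections import defaultdict
from statistics import mean

trials = [
    # (technique, task, correct, seconds)
    ("treemap",   "find_leaf",    True,  8.2),
    ("treemap",   "find_leaf",    False, 11.4),
    ("node_link", "find_leaf",    True,  6.9),
    ("treemap",   "compare_size", True,  5.1),
    ("node_link", "compare_size", False, 9.8),
]

def summarize(trials):
    acc, times = defaultdict(list), defaultdict(list)
    for technique, task, correct, seconds in trials:
        key = (technique, task)
        acc[key].append(int(correct))
        times[key].append(seconds)
    return {key: {"accuracy": mean(acc[key]), "mean_time_s": mean(times[key])}
            for key in acc}

for (technique, task), stats in summarize(trials).items():
    print(f"{technique:9s} {task:13s} "
          f"acc={stats['accuracy']:.2f} time={stats['mean_time_s']:.1f}s")
```

Reporting results per task in this way, rather than pooling all tasks, keeps the influence of task selection visible, which aligns with the paper's emphasis on task selection and on how results are communicated.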
{"title":"Towards Identification and Mitigation of Task-Based Challenges in Comparative Visualization Studies","authors":"Aditeya Pandey, Uzma Haque Syeda, M. Borkin","doi":"10.1109/BELIV51497.2020.00014","DOIUrl":"https://doi.org/10.1109/BELIV51497.2020.00014","url":null,"abstract":"The effectiveness of a visualization technique is dependent on how well it supports the tasks or goals of an end-user. To measure the effectiveness of a visualization technique, researchers often use a comparative study design. In a comparative study, two or more visualization techniques are compared over a set of tasks and commonly measure human performance in terms of task accuracy and completion time. Despite the critical role of tasks in comparative studies, the current lack of guidance in existing literature on best practices for task selection and communication of research results in evaluation studies is problematic. In this work, we systematically identify and curate the task-based challenges of comparative studies by reviewing existing visualization literature on the topic. Furthermore, for each of the presented challenges we discuss the potential threats to validity for a comparative study. The challenges discussed in this paper are further backed by evidence identified in a detailed survey of comparative tree visualization studies. Finally, we recommend best practices from personal experience and the surveyed tree visualization studies to provide guidelines for other researchers to mitigate the challenges. The survey data and a free copy of the paper is available at https://osf.io/g3btk/","PeriodicalId":282674,"journal":{"name":"2020 IEEE Workshop on Evaluation and Beyond - Methodological Approaches to Visualization (BELIV)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126635730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}