"A reflection on seven years of the VAST challenge." J. Scholtz, M. Whiting, C. Plaisant, G. Grinstein. Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization, 2012. DOI: 10.1145/2442576.2442589
Abstract: We describe the evolution of the IEEE Visual Analytics Science and Technology (VAST) Challenge from its origin in 2006 to the present (2012). The VAST Challenge has provided an opportunity for visual analytics researchers to test their innovative approaches to problems in a wide range of subject domains against realistic datasets and problem scenarios. Over time, the Challenge has changed to correspond to the needs of researchers and users. We describe those changes and the impact they have had on the topics selected, the data and questions offered, the submissions received, and the Challenge format.
"Why ask why? Considering motivation in visualization evaluation." Michael Gleicher. Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization, 2012. DOI: 10.1145/2442576.2442586
Abstract: My position is that improving evaluation for visualization requires more than developing more sophisticated evaluation methods. It also requires improving the efficacy of evaluations, which involves issues such as how evaluations are applied, reported, and assessed. Considering the motivations for evaluation in visualization offers a way to explore these issues, but it requires us to develop a vocabulary for discussion. This paper proposes some initial terminology for discussing the motivations of evaluation. Specifically, the scales of actionability and persuasiveness can provide a framework for understanding the motivations of evaluation, and how these relate to the interests of various stakeholders in visualizations. It can help keep issues such as audience, reporting and assessment in focus as evaluation expands to new methods.
"The importance of tracing data through the visualization pipeline." Aritra Dasgupta, Robert Kosara. Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization, 2012. DOI: 10.1145/2442576.2442585
Abstract: Visualization research focuses either on the transformation steps necessary to create a visualization from data, or on the perception of structures after they have been shown on the screen. We argue that an end-to-end approach is necessary that tracks the data all the way through the required steps, and provides ways of measuring the impact of any of the transformations. By feeding that information back into the pipeline, visualization systems will be able to adapt the display to the data being shown, the parameters of the output device, and even the user.
"The four-level nested model revisited: blocks and guidelines." Miriah D. Meyer, M. Sedlmair, T. Munzner. Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization, 2012. DOI: 10.1145/2442576.2442587
Abstract: We propose an extension to the four-level nested model of design and validation of visualization systems that defines the term "guidelines" in terms of blocks at each level. Blocks are the outcomes of the design process at a specific level, and guidelines discuss relationships between these blocks. Within-level guidelines provide comparisons for blocks within the same level, while between-level guidelines provide mappings between adjacent levels of design. These guidelines help a designer choose which abstractions, techniques, and algorithms are reasonable to combine when building a visualization system. This definition of guideline allows analysis of how the validation efforts in different kinds of papers typically lead to different kinds of guidelines. Analysis through the lens of blocks and guidelines also led us to identify four major needs: a definition of the meaning of block at the problem level; mid-level task taxonomies to fill in the blocks at the abstraction level; refinement of the model itself at the abstraction level; and a more complete set of mappings up from the algorithm level to the technique level. These gaps in visualization knowledge present rich opportunities for future work.
"Is your user hunting or gathering insights? Identifying insight drivers across domains." M. Smuc, E. Mayr, Hanna Risku. Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization, 2010. DOI: 10.1145/2110192.2110200
Abstract: In recent years, using the number of insights to benchmark visual analytics tools has become a prominent method in the InfoVis community. The insight methodology has become a frequently used instrument to measure the performance of tools developed for highly specialized purposes and highly specialized domain experts. Some tools, however, target a wider group of experts with knowledge in different domains, and the utility of the insight method for expert user groups without specific domain knowledge has been addressed to a far lesser extent. In a case study we illustrate how and where insights from experts with and without domain knowledge differ, and how these findings might enrich the evaluation of visualization tools designed for use across different domains.
"Do Mechanical Turks dream of square pie charts?" Robert Kosara, Caroline Ziemkiewicz. Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization, 2010. DOI: 10.1145/2110192.2110202
Abstract: Online studies are an attractive alternative to the labor-intensive lab study, and promise the possibility of reaching a larger variety and number of people than at a typical university. There are also a number of drawbacks, however, that have made these studies largely impractical so far. Amazon's Mechanical Turk is a web service that facilitates the assignment of small, web-based tasks to a large pool of anonymous workers. We used it to conduct several perception and cognition studies, one of which was identical to a previous study performed in our lab. We report on our experiences and present ways to avoid common problems by taking them into account in the study design and by taking advantage of Mechanical Turk's features.
"A descriptive model of visual scanning." Stéphane Conversy, C. Hurter, Stéphane Chatty. Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization, 2010. DOI: 10.1145/2110192.2110198
Abstract: When designing a representation, a designer implicitly formulates a sequence of visual tasks required to understand and use the representation effectively. This paper aims to make that sequence of visual tasks explicit, in order to help designers elicit their design choices. In particular, we present a set of concepts to systematically analyze what a user must theoretically do to decipher a representation. The analysis consists of a decomposition of the activity of scanning into elementary visualization operations. We show how the analysis applies to various existing representations, and how expected benefits can be expressed in terms of elementary operations. The set of elementary operations forms the basis of a shared, common language for representation designers. The decomposition highlights the challenges a user encounters when deciphering a representation, and helps designers expose possible flaws in their design, justify their choices, and compare designs.
"Learning-based evaluation of visual analytic systems." Remco Chang, Caroline Ziemkiewicz, Roman Pyzh, Joseph Kielman, W. Ribarsky. Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization, 2010. DOI: 10.1145/2110192.2110197
Abstract: Evaluation in visualization remains a difficult problem because of the unique constraints and opportunities inherent to visualization use. While many potentially useful methodologies have been proposed, there remain significant gaps in assessing the value of the open-ended exploration and complex task-solving that the visualization community holds up as an ideal. In this paper, we propose a methodology to quantitatively evaluate a visual analytics (VA) system based on measuring what is learned by its users as the users reapply the knowledge to a different problem or domain. The motivation for this methodology is based on the observation that the ultimate goal of a user of a VA system is to gain knowledge of and expertise with the dataset, task, or tool itself. We propose a framework for describing and measuring knowledge gain in the analytical process based on these three types of knowledge and discuss considerations for evaluating each. We propose that through careful design of tests that examine how well participants can reapply knowledge learned from using a VA system, the utility of the visualization can be more directly assessed.
"Developing qualitative metrics for visual analytic environments." J. Scholtz. Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization, 2010. DOI: 10.1145/2110192.2110193
Abstract: In this paper, we examine reviews for the entries to the 2009 Visual Analytics Science and Technology (VAST) Symposium Challenge. By analyzing these reviews we gained a better understanding of what is important to our reviewers, both visualization researchers and professional analysts. This is a bottom-up approach to the development of heuristics to use in the evaluation of visual analytic environments. The meta-analysis and the results are presented in this paper.
"Evaluating information visualization in large companies: challenges, experiences and recommendations." M. Sedlmair, Petra Isenberg, D. Baur, A. Butz. Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization, 2010. DOI: 10.1145/2110192.2110204
Abstract: We examine the process and some implications of evaluating information visualization in a large company setting. While several researchers have addressed the difficulties of evaluating information visualizations with regards to changing data, tasks, and visual encodings, considerably less work has been published on the difficulties of evaluation within specific work contexts. In this paper, we specifically focus on the challenges arising in the context of large companies with several thousand employees. We present a collection of evaluation challenges, discuss our own experiences conducting information visualization evaluation within the context of a large automotive company, and present a set of recommendations derived from our experiences. The set of challenges and recommendations can aid researchers and practitioners in preparing and conducting evaluations of their products within a large company setting.