Lowering the Barrier for Successful Replication and Evaluation
Pub Date: 2018-10-01 | DOI: 10.1109/BELIV.2018.8634201
Hendrik Lücke-Tieke, Marcel Beuth, Philipp Schader, T. May, J. Bernard, J. Kohlhammer
Evaluation of a visualization technique is complex and time-consuming. We present a system that aims to ease the design, creation, and execution of controlled experiments for visualizations on the web. The system incorporates parameterizable visualization generation services, thus separating the visualization implementation from study design and execution. This enables experimenters to design and run multiple experiments on the same visualization service in parallel, to replicate experiments, and to compare different visualization services quickly. The system supports the full range from simple questionnaires to visualization-specific interaction techniques, as well as automated task generation based on dynamic sampling of parameter spaces. We feature two examples to demonstrate our service-based approach: one shows how a suite of successive experiments can be conducted, while the other includes an extended replication study.
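The paper does not publish its API, but the service-based idea can be sketched: an experiment controller requests rendered stimuli from a visualization service purely through parameters, never touching the implementation. In the minimal Python sketch below, the endpoint URL and all parameter names are hypothetical.

```python
# Hypothetical sketch of the service-based approach: the study controller
# only knows the service's parameters, not its implementation. The endpoint
# and parameter names below are invented for illustration.
import requests

SERVICE_URL = "https://vis-service.example.org/render"  # hypothetical endpoint

def fetch_stimulus(technique: str, n_points: int, seed: int) -> bytes:
    """Request one rendered study stimulus from the visualization service."""
    params = {
        "technique": technique,  # e.g. "scatterplot" vs. "parallel-coordinates"
        "n": n_points,           # drawn by the task generator from a parameter space
        "seed": seed,            # a fixed seed keeps the stimulus replicable
    }
    response = requests.get(SERVICE_URL, params=params, timeout=10)
    response.raise_for_status()
    return response.content     # e.g. an SVG or PNG shown to the participant

# Several experiments can run against the same service in parallel; they
# differ only in the parameter samples they request.
stimulus = fetch_stimulus("scatterplot", n_points=500, seed=42)
```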
{"title":"Lowering the Barrier for Successful Replication and Evaluation","authors":"Hendrik Lücke-Tieke, Marcel Beuth, Philipp Schader, T. May, J. Bernard, J. Kohlhammer","doi":"10.1109/BELIV.2018.8634201","DOIUrl":"https://doi.org/10.1109/BELIV.2018.8634201","url":null,"abstract":"Evaluation of a visualization technique is complex and time-consuming. We present a system that aims at easing design, creation and execution of controlled experiments for visualizations in the web. We include of parameterizable visualization generation services, thus separating the visualization implementation from study design and execution. This enables experimenters to design and run multiple experiments on the same visualization service in parallel, replicate experiments, and compare different visualization services quickly. The system supports the range from simple questionnaires to visualization-specific interaction techniques as well as automated task generation based on dynamic sampling of parameter spaces. We feature two examples to demonstrate our service-based approach. One example demonstrates how a suite of successive experiments can be conducted, while the other example includes an extended replication study.","PeriodicalId":235801,"journal":{"name":"Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130536930","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design Study Contributions Come in Different Guises: Seven Guiding Scenarios
Pub Date: 2016-10-24 | DOI: 10.1145/2993901.2993913
M. Sedlmair
Design studies are projects in which visualization researchers seek to design visualization tools that help solve challenging real-world problems faced by domain experts. While design studies have become a vital component of visualization research, reflecting on the actionable contributions that result from them often poses challenges. The goal of this paper is to better characterize the different contributions that can result from design study projects. Towards this goal, a set of seven guiding scenarios for characterizing design study contributions is proposed. The scenarios are meant to help authors identify and depict design study contributions that are interesting and actionable for other visualization researchers. They are also meant to provide better guidance for evaluating design study contributions in the reviewing process.
{"title":"Design Study Contributions Come in Different Guises: Seven Guiding Scenarios","authors":"M. Sedlmair","doi":"10.1145/2993901.2993913","DOIUrl":"https://doi.org/10.1145/2993901.2993913","url":null,"abstract":"Design studies are projects in which visualization researchers seek to design visualization tools that help solving challenging real-world problems faced by domain experts. While design studies have become a vital component of visualization research, reflecting on actionable contributions from them often poses challenges. The goal of this paper is to better characterize different contributions that can result from design study projects. Towards this goal, a set of seven guiding scenarios for characterizing design study contributions is proposed. The scenarios are meant to help authors identify and depict design study contributions that are interesting and actionable for other visualization researchers. They are also meant to provide better guidance in evaluating design study contributions in the reviewing process.","PeriodicalId":235801,"journal":{"name":"Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126362425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating Information Visualization on Mobile Devices: Gaps and Challenges in the Empirical Evaluation Design Space
Pub Date: 2016-10-24 | DOI: 10.1145/2993901.2993906
Kerstin Blumenstein, C. Niederer, Markus Wagner, Grischa Schmiedl, A. Rind, W. Aigner
With their increasingly widespread use, mobile devices have become a highly relevant target environment for Information Visualization. However, far too little attention has been paid to the evaluation of interactive visualization techniques on mobile devices. To fill this gap, this paper provides a structured overview of the evaluation approaches commonly used for mobile visualization. It systematically reviews the scientific literature of major InfoVis and HCI venues and categorizes the relevant work along six dimensions that circumscribe the design and evaluation space for visualization on mobile devices. Based on the 21 evaluations reviewed, reproducibility, device variety, and usage environment surface as the three main issues in evaluating information visualization on mobile devices. To overcome these issues, we argue for a transparent description of all research aspects and propose a stronger focus on context of use and technology.
{"title":"Evaluating Information Visualization on Mobile Devices: Gaps and Challenges in the Empirical Evaluation Design Space","authors":"Kerstin Blumenstein, C. Niederer, Markus Wagner, Grischa Schmiedl, A. Rind, W. Aigner","doi":"10.1145/2993901.2993906","DOIUrl":"https://doi.org/10.1145/2993901.2993906","url":null,"abstract":"With their increasingly widespread use, mobile devices have become a highly relevant target environment for Information Visualization. However, far too little attention has been paid to evaluation of interactive visualization techniques on mobile devices. To fill this gap, this paper provides a structured overview of the commonly used evaluation approaches for mobile visualization. For this, it systematically reviews the scientific literature of major InfoVis and HCI venues and categorizes the relevant work based on six dimensions circumscribing the design and evaluation space for visualization on mobile devices. Based on the 21 evaluations reviewed, reproducibility, device variety and usage environment surface as the three main issues in evaluation of information visualization on mobile devices. To overcome these issues, we argue for a transparent description of all research aspects and propose to focus more on context of usage and technology.","PeriodicalId":235801,"journal":{"name":"Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114513891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Nested Workflow Model for Visual Analytics Design and Validation
Pub Date: 2016-10-24 | DOI: 10.1145/2993901.2993915
P. Federico, Albert Amor-Amoros, S. Miksch
Characterizing the problem domain and understanding users' practices and processes are recognized as important steps in designing and validating visualizations, but they are often disregarded in practice, in part because of their complexity. We introduce the nested workflow model for the design and validation of visual analytics, aimed at providing designers with a powerful and expressive modelling tool. The model enables the description of visual analytics processes, at different design levels, in terms of tasks, data, and users, including complex workflow patterns, data and knowledge flows, and collaboration between users. We discuss its application to two visual analytics projects, demonstrating its usefulness for their design and validation.
{"title":"A Nested Workflow Model for Visual Analytics Design and Validation","authors":"P. Federico, Albert Amor-Amoros, S. Miksch","doi":"10.1145/2993901.2993915","DOIUrl":"https://doi.org/10.1145/2993901.2993915","url":null,"abstract":"Characterizing the problem domain and understanding users' practices and processes are recognized as important steps in order to design and validate visualization, but are often disregarded in practice, also because of their complexity. We introduce the nested workflow model for design and validation of visual analytics, aimed at providing designers with a powerful and expressive modelling tool. This model enables the description of visual analytics processes, at different design levels, in terms of tasks, data, and users, including complex workflow patterns, data and knowledge flows, and collaboration between users. We discuss its application to two visual analytics projects, demonstrating its usefulness for their design and validation.","PeriodicalId":235801,"journal":{"name":"Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization","volume":"102 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114049455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cognitive Stages in Visual Data Exploration
Pub Date: 2016-10-24 | DOI: 10.1145/2993901.2993902
M. A. Yalçın, N. Elmqvist, B. Bederson
Data exploration requires forming analysis goals, planning actions, and evaluating results effectively, all of which are complex cognitive activities. The data exploration and analysis process can therefore be improved through a principled and comprehensive analysis of the cognitive activities of a user working with a data exploration tool. However, many taxonomies and evaluations focus on a specific tool or specific design guidelines rather than on cognitive activities comprehensively. In this paper, we first present the Cognitive Exploration Framework, which identifies six stages of cognitive activity in visual data exploration. These stages combine two activities, planning and assessing, with three areas: data analysis, interaction, and visualization. Cognitive barriers in each stage can lower the success and speed of data exploration. The framework also identifies decision-making, existing knowledge, and motivation as factors that influence cognitive activities. We argue that the cognitive stages can be supported by improving the design of tools rather than their computing capabilities, and we demonstrate how the framework clarifies the relationship between design guidelines and specific cognitive stages. The framework can also be used to guide the evaluation of data exploration tools. To reveal cognitive barriers in each stage, we focused on failures instead of success stories, and on motivating self-driven, open-ended exploration instead of using benchmark tasks on fixed datasets. With these goals, we studied short-term casual use of an exploratory tool by novices with limited training. Our results reveal cognitive barriers across all stages. We also discuss directions for future research and applications.
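Read as a grid, the framework's six stages are simply the two activities crossed with the three areas; the enumeration below is a minimal sketch of that structure, with stage names paraphrased from the abstract rather than quoted from the paper.

```python
# Minimal sketch of the framework's 2x3 structure: two activities crossed
# with three areas give the six cognitive stages. Names are paraphrased
# from the abstract, not quoted from the paper.
from itertools import product

ACTIVITIES = ("planning", "assessing")
AREAS = ("data analysis", "interaction", "visualization")

stages = [f"{activity} {area}" for activity, area in product(ACTIVITIES, AREAS)]
print(stages)
# ['planning data analysis', 'planning interaction', 'planning visualization',
#  'assessing data analysis', 'assessing interaction', 'assessing visualization']
```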
{"title":"Cognitive Stages in Visual Data Exploration","authors":"M. A. Yalçın, N. Elmqvist, B. Bederson","doi":"10.1145/2993901.2993902","DOIUrl":"https://doi.org/10.1145/2993901.2993902","url":null,"abstract":"Data exploration requires forming analysis goals, planning actions and evaluating results effectively, all of which are complex cognitive activities. Therefore, the data exploration and analysis process can be improved through a principled and comprehensive approach to analyzing the cognitive activities of the user given a data exploration tool. However, many taxonomies and evaluations focus on a specific tool or specific design guides instead of cognitive activities comprehensively. In this paper, we first present the Cognitive Exploration Framework that identifies six stages of cognitive activities in visual data exploration. These stages are a combination of two activities---planning and assessing---across data analysis, interaction, and visualization. Cognitive barriers in each stage can lower the success and speed of data exploration. The framework also identifies the factors of decision-making, existing knowledge and motivation that influence cognitive activities. We argue that cognitive stages can be supported by improving the design of tools rather than their computing capabilities. We demonstrate how the framework clarifies the structured relationship between design guides to specific cognitive stages. In particular, the framework can also be used to guide evaluation of data exploration tools. To reveal cognitive barriers in each stage, we focused on the failures instead of success stories, and on motivating self-driven open-ended exploration instead of using benchmarked tasks on fixed datasets. With these goals, we studied short-term casual use of an exploratory tool by novices with limited training. Our results reveal cognitive barriers across all stages. We also discuss directions for future research and applications.","PeriodicalId":235801,"journal":{"name":"Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization","volume":"120 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114256299","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Beyond Usability and Performance: A Review of User Experience-focused Evaluations in Visualization
Pub Date: 2016-10-24 | DOI: 10.1145/2993901.2993903
B. Saket, A. Endert, J. Stasko
Traditionally, studies of data visualization techniques and systems have evaluated visualizations with respect to usability goals such as effectiveness and efficiency. These studies assess performance-related metrics such as the time and correctness of participants completing analytic tasks. More recently, several InfoVis studies have instead evaluated visualizations by investigating user experience goals such as memorability, engagement, enjoyment, and fun, employing somewhat different evaluation methodologies to assess them. The growing number of these studies, their alternative methodologies, and disagreements concerning their importance have motivated us to examine them more carefully. In this article, we review this growing collection of visualization evaluations that examine user experience goals, and we discuss several issues regarding these studies, including questions about their motivation and utility. Our aim is to provide a resource for future work that plans to evaluate visualizations against these goals.
{"title":"Beyond Usability and Performance: A Review of User Experience-focused Evaluations in Visualization","authors":"B. Saket, A. Endert, J. Stasko","doi":"10.1145/2993901.2993903","DOIUrl":"https://doi.org/10.1145/2993901.2993903","url":null,"abstract":"Traditionally, studies of data visualization techniques and systems have evaluated visualizations with respect to usability goals such as effectiveness and efficiency. These studies assess performance-related metrics such as time and correctness of participants completing analytic tasks. Alternatively, several studies in InfoVis recently have evaluated visualizations by investigating user experience goals such as memorability, engagement, enjoyment and fun. These studies employ somewhat different evaluation methodologies to assess these other goals. The growing number of these studies, their alternative methodologies, and disagreements concerning their importance have motivated us to more carefully examine them. In this article, we review this growing collection of visualization evaluations that examine user experience goals and we discuss multiple issues regarding the studies including questions about their motivation and utility. Our aim is to provide a resource for future work that plans to evaluate visualizations using these goals.","PeriodicalId":235801,"journal":{"name":"Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127620013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Why Evaluating Uncertainty Visualization is Error Prone
Pub Date: 2016-10-24 | DOI: 10.1145/2993901.2993919
J. Hullman
Evaluating a visualization that depicts uncertainty is fraught with challenges due to the complex psychology of uncertainty. Yet the uncertainty visualization literature pays relatively little attention to selecting and motivating an interpretation or elicitation method for subjective probabilities. I survey existing evaluation work in uncertainty visualization and examine how research in judgment and decision-making that focuses on subjective uncertainty elicitation sheds light on common approaches in visualization. I propose suggestions for practice aimed at reducing errors and noise related to how ground truth is defined for subjective probability estimates, to the choice of an elicitation method, and to the strategies used by subjects making judgments with an uncertainty visualization.
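The abstract does not commit to a scoring rule, but one standard choice from the judgment-and-decision-making literature it draws on is the Brier score, which presupposes exactly the kind of defined ground truth the paper worries about. A minimal sketch, shown as an illustrative assumption rather than the paper's method:

```python
# Minimal sketch of a proper scoring rule (the Brier score) for elicited
# subjective probabilities. The paper does not prescribe this metric; it is
# shown as one standard choice that requires a defined 0/1 ground truth.

def brier_score(elicited_probs, outcomes):
    """Mean squared error between stated probabilities and 0/1 outcomes; lower is better."""
    assert len(elicited_probs) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(elicited_probs, outcomes)) / len(outcomes)

# Probabilities a participant reads off an uncertainty visualization,
# scored against the events that actually occurred (1) or did not (0):
print(brier_score([0.9, 0.7, 0.2], [1, 1, 0]))  # about 0.047
```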
{"title":"Why Evaluating Uncertainty Visualization is Error Prone","authors":"J. Hullman","doi":"10.1145/2993901.2993919","DOIUrl":"https://doi.org/10.1145/2993901.2993919","url":null,"abstract":"Evaluating a visualization that depicts uncertainty is fraught with challenges due to the complex psychology of uncertainty. However, relatively little attention is paid to selecting and motivating a chosen interpretation or elicitation method for subjective probabilities in the uncertainty visualization literature. I survey existing evaluation work in uncertainty visualization, and examine how research in judgment and decision-making that focuses on subjective uncertainty elicitation sheds light on common approaches in visualization. I propose suggestions for practice aimed at reducing errors and noise related to how ground truth is defined for subjective probability estimates, the choice of an elicitation method, and the strategies used by subjects making judgments with an uncertainty visualization.","PeriodicalId":235801,"journal":{"name":"Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133192772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Information Theoretic Measures for Visual Analytics: The Silver Ticket?
Pub Date: 2016-10-24 | DOI: 10.1145/2993901.2993920
Laura A. McNamara, Travis L. Bauer, Michael J. Haass, Laura E. Matzen
In this paper, we argue that information theoretic measures may provide a robust, broadly applicable, repeatable metric for assessing how well a system enables people to reduce high-dimensional data into topically relevant subsets of information. Explosive growth in electronic data necessitates the development of systems that balance automation with human cognitive engagement to facilitate pattern discovery, analysis, and characterization, variously described as "cognitive augmentation" or "insight generation." However, operationalizing the concept of insight in any measurable way remains a difficult challenge for visualization researchers. The "golden ticket" of insight evaluation would be a precise, generalizable, repeatable, and ecologically valid metric that indicates the relative utility of a system in heightening cognitive performance or facilitating insights. Unfortunately, the golden ticket does not yet exist. In its place, we are exploring information theoretic measures derived from Shannon's ideas about information and entropy as a starting point for precise, repeatable, and generalizable approaches to evaluating analytic tools. We are specifically concerned with needle-in-haystack workflows that require interactive search, classification, and reduction of very large heterogeneous datasets into manageable, task-relevant subsets of information. We assert that systems aimed at facilitating pattern discovery, characterization, and analysis, i.e., "insight", must afford an efficient means of sorting the needles from the chaff, and that simple compressibility measures provide a way of tracking changes in information content as people shape meaning from data.
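As a concrete illustration of the "simple compressibility measures" the abstract closes with, the sketch below uses a zlib compression ratio as a crude proxy for information content; the specific choice of zlib is an assumption for illustration, not the authors' implementation.

```python
# Crude sketch of a compressibility measure as a proxy for information
# content. Using zlib's compression ratio here is an illustrative
# assumption; the paper argues for the family of measures, not this one.
import random
import zlib

def compression_ratio(text: str) -> float:
    """Compressed size / raw size: lower means more redundancy, less 'novelty'."""
    raw = text.encode("utf-8")
    return len(zlib.compress(raw)) / len(raw)

random.seed(0)
redundant = "abc" * 2000                                             # highly repetitive
varied = "".join(chr(random.randint(33, 122)) for _ in range(6000))  # little structure

print(compression_ratio(redundant))  # close to 0: almost nothing new
print(compression_ratio(varied))     # much higher: hard to compress

# Tracking how such a ratio changes as an analyst whittles a large corpus
# down to a task-relevant subset gives one repeatable signal of information
# content along the needle-in-haystack workflow.
```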
{"title":"Information Theoretic Measures for Visual Analytics: The Silver Ticket?","authors":"Laura A. McNamara, Travis L. Bauer, Michael J. Haass, Laura E. Matzen","doi":"10.1145/2993901.2993920","DOIUrl":"https://doi.org/10.1145/2993901.2993920","url":null,"abstract":"In this paper, we argue that information theoretic measures may provide a robust, broadly applicable, repeatable metric to assess how a system enables people to reduce high-dimensional data into topically relevant subsets of information. Explosive growth in electronic data necessitates the development of systems that balance automation with human cognitive engagement to facilitate pattern discovery, analysis and characterization, variously described as \"cognitive augmentation\" or \"insight generation.\" However, operationalizing the concept of insight in any measurable way remains a difficult challenge for visualization researchers. The \"golden ticket\" of insight evaluation would be a precise, generalizable, repeatable, and ecologically valid metric that indicates the relative utility of a system in heightening cognitive performance or facilitating insights. Unfortunately, the golden ticket does not yet exist. In its place, we are exploring information theoretic measures derived from Shannon's ideas about information and entropy as a starting point for precise, repeatable, and generalizable approaches for evaluating analytic tools. We are specifically concerned with needle-in-haystack workflows that require interactive search, classification, and reduction of very large heterogeneous datasets into manageable, task-relevant subsets of information. We assert that systems aimed at facilitating pattern discovery, characterization and analysis -- i.e., \"insight\" - must afford an efficient means of sorting the needles from the chaff; and simple compressibility measures provide a way of tracking changes in information content as people shape meaning from data.","PeriodicalId":235801,"journal":{"name":"Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132576981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using Concrete and Realistic Data in Evaluating Initial Visualization Designs
Pub Date: 2016-10-24 | DOI: 10.1145/2993901.2993917
S. Knudsen, Jeppe Gerner Pedersen, Thor Herdal, J. E. Larsen
We explore means of designing and evaluating initial visualization ideas with concrete and realistic data in cases where data is not readily available. Our approach is useful for exploring new domains and avenues for visualization, and contrasts with other visualization work, which typically operates under the assumption that data has already been collected and is ready to be visualized. We argue that it is sensible to understand data requirements and evaluate the potential value of visualization before devising means of automatic data collection. We base our exploration on three cases selected to span a range of factors, such as the role of the person doing the data collection and the type of instrumentation used. The three cases involve visualizing data from the sports, construction, and cooking domains, and primarily use time-domain data and visualizations. For each case, we briefly describe the design case and problem, the manner in which we collected data, and the findings obtained from evaluations. Finally, we describe four factors of our data collection approach and discuss its potential outcomes.
{"title":"Using Concrete and Realistic Data in Evaluating Initial Visualization Designs","authors":"S. Knudsen, Jeppe Gerner Pedersen, Thor Herdal, J. E. Larsen","doi":"10.1145/2993901.2993917","DOIUrl":"https://doi.org/10.1145/2993901.2993917","url":null,"abstract":"We explore means of designing and evaluating initial visualization ideas, with concrete and realistic data in cases where data is not readily available. Our approach is useful in exploring new domains and avenues for visualization, and contrasts other visualization work, which typically operate under the assumption that data has already been collected, and is ready to be visualized. We argue that it is sensible to understand data requirements and evaluate the potential value of visualization before devising means of automatic data collection. We base our exploration on three cases selected to span a range of factors, such as the role of the person doing the data collection and the type of instrumentation used. The three cases relate to visualizing sports, construction, and cooking domain data, and use primarily time-domain data and visualizations. For each case, we briefly describe the design case and problem, the manner in which we collected data, and the findings obtained from evaluations. Afterwards, we describe four factors of our data collection approach, and discuss potential outcomes from it.","PeriodicalId":235801,"journal":{"name":"Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132401932","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Survey on Interaction Log Analysis for Evaluating Exploratory Visualizations
Pub Date: 2016-10-24 | DOI: 10.1145/2993901.2993912
Omar Eltayeby, Wenwen Dou
The trend toward exploratory visualization has driven the visual analytics (VA) community to design specialized evaluation methods. The main goals of these evaluations are to understand the exploration process and to improve it by recording users' interactions and thoughts. Much of the recent work has focused on manual evaluation of interaction logs; lately, however, some researchers have taken steps toward automating the process. In this paper, we show how interaction log analysis can be automated by summarizing the steps of previous work into building blocks. In addition, we demonstrate the use of each building block through use case scenarios, such as how to encode and segment interactions and which machine learning algorithms can automate the process. We also link the reviewed studies to sensemaking aspects and to the selection of interaction taxonomies.
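As a minimal illustration of two of the building blocks the survey describes, the sketch below encodes logged interaction events as symbols and segments the stream at long pauses; the event vocabulary and the 30-second gap threshold are illustrative assumptions rather than values taken from any surveyed system.

```python
# Minimal sketch of two building blocks: encoding interactions as symbols
# (cf. interaction taxonomies) and segmenting the stream at long pauses.
# The vocabulary and the 30 s threshold are illustrative assumptions.

ENCODING = {"filter": "F", "zoom": "Z", "select": "S", "annotate": "A"}

def segment(log, gap_threshold=30.0):
    """Split (timestamp, event) pairs into symbol strings at pauses longer than gap_threshold seconds."""
    segments, current, last_t = [], [], None
    for t, event in log:
        if last_t is not None and t - last_t > gap_threshold:
            segments.append("".join(current))
            current = []
        current.append(ENCODING.get(event, "?"))
        last_t = t
    if current:
        segments.append("".join(current))
    return segments

log = [(0.0, "filter"), (2.1, "zoom"), (3.5, "select"),  # first burst
       (60.0, "zoom"), (61.2, "annotate")]               # second burst after a pause
print(segment(log))  # ['FZS', 'ZA'] -- sequences ready for clustering or sequence mining
```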
{"title":"A Survey on Interaction Log Analysis for Evaluating Exploratory Visualizations","authors":"Omar Eltayeby, Wenwen Dou","doi":"10.1145/2993901.2993912","DOIUrl":"https://doi.org/10.1145/2993901.2993912","url":null,"abstract":"The trend of exploratory visualization development has driven the visual analytics (VA) community to design special evaluation methods. The main goals of these evaluations are to understand the exploration process and improve it by recording users' interactions and thoughts. Some of the recent works have focused on performing manual evaluations of the interaction logs, however, lately some researchers have taken the step towards automating the process using interaction logs. In this paper we show the capability of how interaction log analysis can be automated by summarizing previous works' steps into building blocks. In addition, we demonstrate the use of each building block by showing their methodologies as use case scenarios, such as how to encode and segment interactions and what machine learning algorithms can automate the process. We also link the studies reviewed with sensemaking aspects and interaction taxonomies selection.","PeriodicalId":235801,"journal":{"name":"Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121832947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}