ModelSpace: Visualizing the Trails of Data Models in Visual Analytics Systems
Eli T. Brown, Sriram Yarlagadda, Kristin A. Cook, Remco Chang, A. Endert
Pub Date: 2018-10-22 | DOI: 10.1109/MLUI52768.2018.10075649
User interactions with visualization systems have been shown to encode a great deal of information about the users’ thinking processes, and analyzing their interaction trails can teach us more about the users, their approach, and how they arrived at insights. This deeper understanding is critical to improving their experience and outcomes, and tools exist to visualize logs of interactions. It can be difficult, though, to determine the structurally interesting parts of interaction data, such as which set of button clicks constitutes an action that matters. In visual analytics systems that use machine learning models, there is a convenient marker of when the user has significantly altered the state of the system via interaction: when the model is updated based on new information. We present a method for numerical analytic provenance that uses high-dimensional visualization to show and compare the trails of these sequences of model states. We evaluate this approach with a prototype tool, ModelSpace, applied to two case studies on experimental data from model-steering visual analytics tools. ModelSpace reveals individual users’ progress, the relationships between their paths, and the characteristics of certain regions of the space of possible models.
{"title":"ModelSpace: Visualizing the Trails of Data Models in Visual Analytics Systems","authors":"Eli T. Brown, Sriram Yarlagadda, Kristin A. Cook, Remco Chang, A. Endert","doi":"10.1109/MLUI52768.2018.10075649","DOIUrl":"https://doi.org/10.1109/MLUI52768.2018.10075649","url":null,"abstract":"User interactions with visualization systems have been shown to encode a great deal of information about the the users’ thinking processes, and analyzing their interaction trails can teach us more about the users, their approach, and how they arrived at insights. This deeper understanding is critical to improving their experience and outcomes, and there are tools available to visualize logs of interactions. It can be difficult to determine the structurally interesting parts of interaction data, though, like what set of button clicks constitutes an action that matters. In the case of visual analytics systems that use machine learning models, there is a convenient marker of when the user has significantly altered the state of the system via interaction: when the model is updated based on new information. We present a method for numerical analytic provenance using high-dimensional visualization to show and compare the trails of these sequences of model states of the system. We evaluate this approach with a prototype tool, ModelSpace, applied to two case studies on experimental data from model-steering visual analytics tools. ModelSpace reveals individual user’s progress, the relationships between their paths, and the characteristics of certain regions of the space of possible models.","PeriodicalId":421877,"journal":{"name":"2018 IEEE Workshop on Machine Learning from User Interaction for Visualization and Analytics (MLUI)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126937700","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
While training a machine learning model, data scientists often need to set hyperparameters that configure the structure and other characteristics of the model and can significantly influence the training result. However, given the complexity of model algorithms and training processes, identifying a sweet spot in the hyperparameter space for a specific problem can be challenging. This paper characterizes user requirements for hyperparameter tuning and proposes a prototype system that provides model-agnostic support. We conducted interviews with data science practitioners in industry to collect user requirements and identify opportunities for interactive visual support. We present HyperTuner, a prototype system that supports hyperparameter search and analysis via interactive visual analytics. The design treats models as black boxes, with the hyperparameters and data as inputs and the predictions and performance metrics as outputs. We discuss our preliminary evaluation, in which data science practitioners deemed HyperTuner useful and desirable for gaining insight into the influence of hyperparameters on model performance and convergence. The design also surfaced additional requirements, such as more advanced support for automated tuning and debugging.
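As a rough illustration of the black-box framing, the sketch below runs a random search over a small hyperparameter space and records each trial as a flat row of inputs (hyperparameters) and outputs (a performance metric) — the kind of table an interactive tool like HyperTuner could visualize. The model, search space, and dataset are arbitrary choices, not taken from the paper.

```python
# Sketch: treat a model as a black box (hyperparameters in, metrics out)
# and log every trial in a flat table suitable for visual analysis.
import random
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
space = {"n_estimators": [50, 100, 200], "max_depth": [3, 5, 10, None]}

trials = []
for _ in range(8):  # random search over the hyperparameter space
    params = {k: random.choice(v) for k, v in space.items()}
    score = cross_val_score(
        RandomForestClassifier(**params, random_state=0), X, y, cv=3).mean()
    trials.append({**params, "accuracy": score})  # one row per trial

# The resulting table maps hyperparameter settings to performance.
for row in sorted(trials, key=lambda r: -r["accuracy"]):
    print(row)
```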
{"title":"HyperTuner: Visual Analytics for Hyperparameter Tuning by Professionals","authors":"Tianyi Li, G. Convertino, Wenbo Wang, Haley Most, Tristan Zajonc, Yi-Hsun Tsai","doi":"10.1109/MLUI52768.2018.10075647","DOIUrl":"https://doi.org/10.1109/MLUI52768.2018.10075647","url":null,"abstract":"While training a machine learning model, data scientists often need to determine some hyperparameters to set up the model. The values of hyperparameters configure the structure and other characteristics of the model and can significantly influence the training result. However, given the complexity of the model algorithms and the training processes, identifying a sweet spot in the hyperparameter space for a specific problem can be challenging. This paper characterizes user requirements for hyperparameter tuning and proposes a prototype system to provide model-agnostic support. We conducted interviews with data science practitioners in industry to collect user requirements and identify opportunities for leveraging interactive visual support. We present HyperTuner, a prototype system that supports hyperparameter search and analysis via interactive visual analytics. The design treats models as black boxes with the hyperparameters and data as inputs, and the predictions and performance metrics as outputs. We discuss our preliminary evaluation results, where the data science practitioners deem HyperTuner as useful and desired to help gain insights into the influence of hyperparameters on model performance and convergence. The design also triggered additional requirements such as involving more advanced support for automated tuning and debugging.","PeriodicalId":421877,"journal":{"name":"2018 IEEE Workshop on Machine Learning from User Interaction for Visualization and Analytics (MLUI)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117006186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Providing Contextual Assistance in Response to Frustration in Visual Analytics Tasks
P. Panwar, A. Bradley, C. Collins
Pub Date: 2018-10-22 | DOI: 10.1109/MLUI52768.2018.10075561
This paper proposes a method for helping users in visual analytics tasks by using machine learning to detect and respond to frustration with appropriate recommendations and guidance. We collected an emotion dataset from 28 participants carrying out intentionally difficult visualization tasks and used it to build an interactive frustration-detection model that identifies frustration from data streamed by a small wrist-worn skin conductance device and an eye tracker. We present a work-in-progress design exploration of interventions appropriate to the different intensities of frustration detected by the model. The interaction method and the level of interruption and assistance can be adjusted in response to the intensity and longevity of detected user states.
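A hedged sketch of the underlying classification idea: summarize a physiological signal in fixed-size windows and train a classifier to score each window for frustration. The simulated signal, window size, features, and labels are all fabricated for illustration; the paper's model additionally uses eye tracking and operates on live streamed data.

```python
# Sketch: classify frustration from windowed skin-conductance features.
# All data here is simulated; a real system would stream from the sensor.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
WIN = 64  # samples per analysis window (an arbitrary choice)

def features(window):
    # Simple per-window summary statistics of the conductance signal.
    return [window.mean(), window.std(), np.ptp(window)]

# Fabricated training data: calm windows vs. noisier "frustrated" windows.
calm = [features(rng.normal(0.0, 0.5, WIN)) for _ in range(200)]
frus = [features(rng.normal(0.8, 1.5, WIN)) for _ in range(200)]
X = np.array(calm + frus)
y = np.array([0] * 200 + [1] * 200)

clf = RandomForestClassifier(random_state=0).fit(X, y)

# Score one incoming window; the intervention level could scale with this.
new_window = rng.normal(0.8, 1.5, WIN)
print("frustration probability:",
      clf.predict_proba([features(new_window)])[0, 1])
```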
{"title":"Providing Contextual Assistance in Response to Frustration in Visual Analytics Tasks","authors":"P. Panwar, A. Bradley, C. Collins","doi":"10.1109/MLUI52768.2018.10075561","DOIUrl":"https://doi.org/10.1109/MLUI52768.2018.10075561","url":null,"abstract":"This paper proposes a method for helping users in visual analytic tasks by using machine learning to detect and respond to frustration and provide appropriate recommendations and guidance. We have collected an emotion dataset from 28 participants carrying out intentionally difficult visualization tasks and used it to build an interactive frustration state detection model which detects frustration using data streaming from a small wrist-worn skin conductance device and eye tracking. We present a work-in-progress design exploration for interventions appropriate to different intensities of frustrations detected by the model. The interaction method and the level of interruption and assistance can be adjusted in response to the intensity and longevity of detected user states.","PeriodicalId":421877,"journal":{"name":"2018 IEEE Workshop on Machine Learning from User Interaction for Visualization and Analytics (MLUI)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127073413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Human-in-the-Loop Software Platform
Fang Cao, David J. Scroggins, Lebna V. Thomas, Eli T. Brown
Pub Date: 2018-10-22 | DOI: 10.1109/MLUI52768.2018.10075650
Human-in-the-Loop (HIL) analytics systems blend the intuitive sensemaking abilities of humans with the raw number-crunching capability of machine learning. The web and front-end visualization libraries such as D3.js make it easier than ever to develop cross-platform HIL systems for wide distribution, and analytics toolkits such as scikit-learn provide straightforward, coherent interfaces to a variety of machine learning algorithms. However, creating novel HIL systems requires expertise in a range of skills, including data visualization, web engineering, and machine learning. The Library for Interactive Human-Computer Analytics (LIHCA) is a platform that simplifies creating applications that use interactive visualizations to steer back-end machine learners. Developers can enhance their interactive visualizations by connecting to a LIHCA API back end that manages data, runs machine learning algorithms, and returns the results in a visualization-convenient format. We discuss design considerations for HIL systems, describe an implementation of LIHCA that satisfies those considerations, and present a set of implemented examples illustrating the library's usage.
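In the spirit of this architecture, here is a minimal sketch of a back end that a front-end visualization could steer: one endpoint accepts user-labeled points, another returns predictions as JSON for rendering. The endpoint names, payload shapes, and the choice of Flask and logistic regression are assumptions for illustration, not LIHCA's actual API.

```python
# Sketch: a tiny Flask back end -- the visualization posts user feedback,
# the server refits a model and returns results in a JSON format that is
# convenient to render. Hypothetical endpoints, not LIHCA's real interface.
from flask import Flask, request, jsonify
import numpy as np
from sklearn.linear_model import LogisticRegression

app = Flask(__name__)
X, y = [], []  # user-labeled points accumulated across interactions
model = LogisticRegression()

@app.route("/label", methods=["POST"])
def label():
    # Body: {"point": [..features..], "label": 0 or 1}
    body = request.get_json()
    X.append(body["point"])
    y.append(body["label"])
    if len(set(y)) > 1:  # need examples of both classes before fitting
        model.fit(np.array(X), np.array(y))
    return jsonify({"n_labeled": len(y)})

@app.route("/predict", methods=["POST"])
def predict():
    # Body: {"points": [[..], ...]}; assumes /label has seen both classes.
    pts = np.array(request.get_json()["points"])
    return jsonify({"proba": model.predict_proba(pts)[:, 1].tolist()})

if __name__ == "__main__":
    app.run(port=5000)
```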
{"title":"A Human-in-the-Loop Software Platform","authors":"Fang Cao, David J. Scroggins, Lebna V. Thomas, Eli T. Brown","doi":"10.1109/MLUI52768.2018.10075650","DOIUrl":"https://doi.org/10.1109/MLUI52768.2018.10075650","url":null,"abstract":"Human-in-the-Loop (HIL) analytics systems blend the intuitive sensemaking abilities of humans with the raw number-crunching capability of machine learning. The web and front-end visualization libraries, such as D3.js, make it easier than ever to develop cross-platform HIL systems for wide distribution. Analytics toolkits such as scikit-learn provide straightforward, coherent interfaces for a variety of machine learning algorithms. However, creating novel HIL systems requires expertise in a range of skills including data visualization, web engineering, and machine learning. The Library for Interactive Human-Computer Analytics (LIHCA) is a platform to simplify creating applications that use interactive visualizations to steer back-end machine learners. Developers can enhance their interactive visualizations by connecting to a LIHCA API back end that manages data, runs machine learning algorithms, and returns the results in a visualization-convenient format. We provide a discussion of design considerations for HIL systems, an implementation of LIHCA to satisfy those considerations, and a set of implemented examples to illustrate the usage of the library.","PeriodicalId":421877,"journal":{"name":"2018 IEEE Workshop on Machine Learning from User Interaction for Visualization and Analytics (MLUI)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116935267","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Speculative Execution for Guided Visual Analytics
F. Sperrle, J. Bernard, M. Sedlmair, D. Keim, Mennatallah El-Assady
Pub Date: 2018-10-22 | DOI: 10.1109/MLUI52768.2018.10075559
We propose the concept of Speculative Execution for Visual Analytics and discuss its effectiveness for model exploration and optimization. Speculative Execution enables the automatic generation of alternative, competing model configurations that do not alter the current model state unless explicitly confirmed by the user. These alternatives are computed based on either user interactions or model quality measures and can be explored using delta-visualizations. By automatically proposing modeling alternatives, systems employing Speculative Execution can narrow the gap between users and models, reduce confirmation bias, and speed up optimization processes. In this paper, we assemble five application scenarios showcasing the potential of Speculative Execution, as well as directions for further research.
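A small sketch of the core mechanic under stated assumptions: keep the current model untouched while fitting competing alternatives, score them with a quality measure, and promote one only on explicit confirmation. Clustering with k-means and silhouette as the quality measure are illustrative choices, not the paper's.

```python
# Sketch of the Speculative Execution idea: compute competing model
# configurations without altering the current state; only an explicit
# user confirmation promotes an alternative.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)
current_k = 3
current = KMeans(n_clusters=current_k, n_init=10, random_state=0).fit(X)

# Speculatively evaluate neighboring configurations; `current` is untouched.
alternatives = {}
for k in (current_k - 1, current_k + 1):
    alt = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    alternatives[k] = (alt, silhouette_score(X, alt.labels_))

print("current k=%d, silhouette=%.3f"
      % (current_k, silhouette_score(X, current.labels_)))
for k, (alt, s) in alternatives.items():
    print("speculative k=%d, silhouette=%.3f" % (k, s))  # delta view material

# Simulate the user accepting the best-scoring alternative.
chosen = max(alternatives, key=lambda k: alternatives[k][1])
current = alternatives[chosen][0]
```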
{"title":"Speculative Execution for Guided Visual Analytics","authors":"F. Sperrle, J. Bernard, M. Sedlmair, D. Keim, Mennatallah El-Assady","doi":"10.1109/MLUI52768.2018.10075559","DOIUrl":"https://doi.org/10.1109/MLUI52768.2018.10075559","url":null,"abstract":"We propose the concept of Speculative Execution for Visual Analytics and discuss its effectiveness for model exploration and optimization. Speculative Execution enables the automatic generation of alternative, competing model configurations that do not alter the current model state unless explicitly confirmed by the user. These alternatives are computed based on either user interactions or model quality measures and can be explored using delta-visualizations. By automatically proposing modeling alternatives, systems employing Speculative Execution can shorten the gap between users and models, reduce the confirmation bias and speed up optimization processes. In this paper, we have assembled five application scenarios showcasing the potential of Speculative Execution, as well as a potential for further research.","PeriodicalId":421877,"journal":{"name":"2018 IEEE Workshop on Machine Learning from User Interaction for Visualization and Analytics (MLUI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129009005","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using Hidden Markov Models to Determine Cognitive States of Visual Analytic Users
M. Aboufoul, Ryan Wesslen, Isaac Cho, Wenwen Dou, Samira Shaikh
Pub Date: 2018-10-22 | DOI: 10.1109/MLUI52768.2018.10075648
Many visual analytics tools assist users in examining large amounts of information at once via coordinated views that include graphs, network connections, and maps. However, the cognitive processes those users undergo while using such tools remain poorly understood. Psychological studies suggest that when examining large amounts of analytical data with the goal of reaching a decision, individuals undergo a planning stage, followed by analysis, before finally drawing conclusions. While the general order of these cognitive states has been theorized, the exact states of individuals at specific points during their interaction with visual analytics systems remain unclear. In this work, we developed Hidden Markov Models to determine the cognitive states of users based solely on their interactions with visual analytics systems. Hidden Markov Models allow for the classification of observations through hidden states (cognitive states, in our case) as well as the prediction of future cognitive states. We generate these models through unsupervised learning and evaluate them using established criteria such as AIC and BIC. Our solutions are designed to help improve visual analytics tools by providing a better understanding of users' cognitive thought processes during data-intensive analysis tasks.
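A minimal sketch of this modeling recipe, assuming the hmmlearn library and fabricated interaction features: fit Gaussian HMMs with varying numbers of hidden states, select among them with BIC, and decode the inferred state sequence. The feature encoding of real interaction logs, and the exact free-parameter count in the BIC term, are simplifying assumptions.

```python
# Sketch: fit HMMs to an interaction-log feature stream and pick the
# number of hidden (cognitive) states by BIC. Data is simulated.
import numpy as np
from hmmlearn.hmm import GaussianHMM  # pip install hmmlearn

rng = np.random.default_rng(0)
# Fabricated 2D feature stream, e.g., (dwell time, actions per minute),
# drifting through three regimes that stand in for cognitive states.
X = np.vstack([rng.normal(m, 0.5, size=(100, 2)) for m in (0, 2, 4)])

def bic(model, X):
    k, d = model.n_components, X.shape[1]
    # Approximate free-parameter count for a diagonal-covariance Gaussian HMM:
    # start probs + transition matrix + means + variances.
    p = (k - 1) + k * (k - 1) + 2 * k * d
    return p * np.log(len(X)) - 2 * model.score(X)

for k in (2, 3, 4):
    hmm = GaussianHMM(n_components=k, covariance_type="diag",
                      n_iter=50, random_state=0).fit(X)
    print(f"states={k}  BIC={bic(hmm, X):.1f}")

best = GaussianHMM(n_components=3, covariance_type="diag",
                   n_iter=50, random_state=0).fit(X)
print("decoded state sequence:", best.predict(X)[:10])  # inferred states
```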
{"title":"Using Hidden Markov Models to Determine Cognitive States of Visual Analytic Users","authors":"M. Aboufoul, Ryan Wesslen, Isaac Cho, Wenwen Dou, Samira Shaikh","doi":"10.1109/MLUI52768.2018.10075648","DOIUrl":"https://doi.org/10.1109/MLUI52768.2018.10075648","url":null,"abstract":"Many visual analytics tools exist to assist users in examining large amounts of information at once via coordinated views that include graphs, network connections and maps. However, the cognitive processes that those users undergo while using such tools remain a mystery. Many psychological studies suggest that individuals may undergo some planning stage followed by analysis before finally making conclusions when examining large amounts of analytical data with the goal of reaching a decision. While the general order of these cognitive states has been theorized, the exact states of individuals at specific points during their interaction with visual analytic systems remain unclear. In this work, we developed models to determine the cognitive states of users based solely on their interactions with visual analytics systems via Hidden Markov Models. Hidden Markov Models allow for the classification of observations through hidden states (cognitive states in our case) as well as the prediction of future cognitive states. We generate these models through unsupervised learning and use established metrics such as AIC and BIC metrics to evaluate our models. Our solutions are designed to help improve visual analytics tools by providing a better understanding of cognitive thought processes of users during data intensive analysis tasks.","PeriodicalId":421877,"journal":{"name":"2018 IEEE Workshop on Machine Learning from User Interaction for Visualization and Analytics (MLUI)","volume":"124 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128173885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Computer-supported Interactive Assignment of Keywords for Literature Collections
S. Agarwal, Fabian Beck
Pub Date: 2018-10-22 | DOI: 10.1109/MLUI52768.2018.10075564
A curated literature collection on a specific topic helps researchers find relevant articles quickly. Assigning multiple keywords to each article is one technique for structuring such a collection, but it is challenging to assign all the keywords consistently, without gaps or ambiguities. We propose supporting the user with a machine learning technique that suggests keywords for articles in a literature collection browser, and we provide visual explanations to make the keyword suggestions transparent. The suggestions are based on previous keyword assignments, and the machine learning technique learns on the fly from the user's interactive assignments. We seamlessly integrate the proposed technique into an existing literature collection browser and investigate various usage scenarios through an early prototype.
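To sketch the suggestion mechanism under simple assumptions: represent articles as TF-IDF vectors, train a one-vs-rest classifier on previous keyword assignments, and rank keywords for a new article by predicted probability; after the user confirms or corrects, the assignment is appended and the model refit. The articles, keywords, and learner below are toy stand-ins for the paper's technique.

```python
# Sketch: suggest keywords from previous assignments, refitting on the fly
# after each user interaction. Fabricated articles and keywords.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

articles = [
    "interactive visual analytics for model steering",
    "hyperparameter tuning with visual support",
    "hidden markov models for user interaction logs",
]
keywords = [["visual analytics", "interaction"],
            ["visual analytics", "machine learning"],
            ["machine learning", "interaction"]]

vec = TfidfVectorizer()
mlb = MultiLabelBinarizer()
clf = OneVsRestClassifier(LogisticRegression())
clf.fit(vec.fit_transform(articles), mlb.fit_transform(keywords))

# Rank keyword suggestions for an unlabeled article; the user's confirmed
# assignment would be appended to the training data and the model refit.
new = "steering clustering models through user interaction"
scores = clf.predict_proba(vec.transform([new]))[0]
for kw, s in sorted(zip(mlb.classes_, scores), key=lambda t: -t[1]):
    print(f"{kw}: {s:.2f}")
```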
{"title":"Computer-supported Interactive Assignment of Keywords for Literature Collections","authors":"S. Agarwal, Fabian Beck","doi":"10.1109/MLUI52768.2018.10075564","DOIUrl":"https://doi.org/10.1109/MLUI52768.2018.10075564","url":null,"abstract":"A curated literature collection on a specific topic helps researchers to find relevant articles quickly. Assigning multiple keywords to each article is one of the techniques to structure such a collection. But it is challenging to assign all the keywords consistently without any gaps or ambiguities. We propose to support the user with a machine learning technique that suggests keywords for articles in a literature collection browser. We provide visual explanations to make the keyword suggestions transparent. The suggestions are based on previous keyword assignments. The machine learning technique learns on the fly from the interactive assignments of the user. We seamlessly integrate the proposed technique in an existing literature collection browser and investigate various usage scenarios through an early prototype.","PeriodicalId":421877,"journal":{"name":"2018 IEEE Workshop on Machine Learning from User Interaction for Visualization and Analytics (MLUI)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127910699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Bidirectional Pipeline for Semantic Interaction
Michelle Dowling, John E. Wenskovitch, P. Hauck, A. Binford, Nicholas F. Polys, Chris North
Pub Date: 2018-10-22 | DOI: 10.1109/MLUI52768.2018.10075562
Semantic interaction techniques in visual analytics tools allow analysts to indirectly adjust model parameters by directly manipulating the visual output of the models. Many existing tools that support semantic interaction share several features: they compose a set of mathematical models within a pipeline, interpret a semantic interaction through an inverse computation of one or more of those models, and rely on an underlying bidirectional structure within the pipeline. We propose a new visual analytics pipeline that captures these necessary features of semantic interaction. To demonstrate how this pipeline can be used, we represent existing visual analytics tools and their semantic interactions within it. We also explore a series of new visual analytics tools with semantic interaction to highlight how the new pipeline can represent new research as well.
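A toy sketch of the bidirectional idea, not the paper's formalism: the forward path projects weighted data to a 2D layout, and the inverse path interprets a semantic interaction (dragging two points together) as evidence about attribute weights, which are updated before the layout is recomputed. The weight-update rule here is hypothetical.

```python
# Sketch: forward path = weighted projection; inverse path = infer new
# attribute weights from a "these points belong together" interaction.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
w = np.ones(5)  # attribute weights steered by interaction

def forward(X, w):
    # Weighted data -> pairwise distances -> 2D layout.
    return MDS(n_components=2, random_state=0).fit_transform(X * np.sqrt(w))

layout = forward(X, w)

# Inverse: the user drags points 3 and 7 together, signalling similarity.
# Upweight dimensions on which they already agree, downweight the rest
# (a hypothetical update rule for illustration).
i, j = 3, 7
diff = np.abs(X[i] - X[j])
w *= np.exp(-diff)
w *= len(w) / w.sum()       # renormalize the weights
layout = forward(X, w)      # recompute the layout with the learned weights
print("updated weights:", np.round(w, 2))
```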
{"title":"A Bidirectional Pipeline for Semantic Interaction","authors":"Michelle Dowling, John E. Wenskovitch, P. Hauck, A. Binford, Nicholas F. Polys, Chris North","doi":"10.1109/MLUI52768.2018.10075562","DOIUrl":"https://doi.org/10.1109/MLUI52768.2018.10075562","url":null,"abstract":"Semantic interaction techniques in visual analytics tools allow analysts to indirectly adjust model parameters by directly manipulating the visual output of the models. Many existing tools that support semantic interaction do so with a number of similar features, including using a set of mathematical models that are composed within a pipeline, having a semantic interaction be interpreted by an inverse computation of one or more mathematical models, and using an underlying bidirectional structure within the pipeline. We propose a new visual analytics pipeline that captures these necessary features of semantic interactions. To demonstrate how this pipeline can be used, we represent existing visual analytics tools and their semantic interactions within this pipeline. We also explore a series of new visual analytics tools with semantic interaction to highlight how the new pipeline can represent new research as well.","PeriodicalId":421877,"journal":{"name":"2018 IEEE Workshop on Machine Learning from User Interaction for Visualization and Analytics (MLUI)","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126464223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}