
2018 IEEE Workshop on Machine Learning from User Interaction for Visualization and Analytics (MLUI): Latest Publications

ModelSpace: Visualizing the Trails of Data Models in Visual Analytics Systems
Eli T. Brown, Sriram Yarlagadda, Kristin A. Cook, Remco Chang, A. Endert
User interactions with visualization systems have been shown to encode a great deal of information about the users’ thinking processes, and analyzing their interaction trails can teach us more about the users, their approach, and how they arrived at insights. This deeper understanding is critical to improving their experience and outcomes, and there are tools available to visualize logs of interactions. It can be difficult to determine the structurally interesting parts of interaction data, though, like what set of button clicks constitutes an action that matters. In the case of visual analytics systems that use machine learning models, there is a convenient marker of when the user has significantly altered the state of the system via interaction: when the model is updated based on new information. We present a method for numerical analytic provenance using high-dimensional visualization to show and compare the trails of these sequences of model states of the system. We evaluate this approach with a prototype tool, ModelSpace, applied to two case studies on experimental data from model-steering visual analytics tools. ModelSpace reveals individual users’ progress, the relationships between their paths, and the characteristics of certain regions of the space of possible models.
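The paper’s trail visualization could be sketched in miniature: after each model-updating interaction, record the model-state vector, project each state to 2D, and draw the path. The landmark-style projection below (distance to the first and last state) is a simplified stand-in for the high-dimensional projection the authors use; all names and data are illustrative.

```python
import math

def model_trail_2d(states):
    """Project a sequence of high-dimensional model-state vectors to 2D
    using distances to the first and last state as landmark coordinates.
    (An illustrative stand-in for an MDS-style projection.)"""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    first, last = states[0], states[-1]
    return [(dist(s, first), dist(s, last)) for s in states]

# Each vector is a hypothetical model state (e.g. feature weights after an interaction)
trail = model_trail_2d([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
```

Plotting `trail` as a connected polyline would show one user’s path through model space; overlaying several users’ trails supports the comparisons the abstract describes.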
DOI: https://doi.org/10.1109/MLUI52768.2018.10075649 (published 2018-10-22)
Citations: 16
HyperTuner: Visual Analytics for Hyperparameter Tuning by Professionals
Tianyi Li, G. Convertino, Wenbo Wang, Haley Most, Tristan Zajonc, Yi-Hsun Tsai
While training a machine learning model, data scientists often need to determine some hyperparameters to set up the model. The values of hyperparameters configure the structure and other characteristics of the model and can significantly influence the training result. However, given the complexity of the model algorithms and the training processes, identifying a sweet spot in the hyperparameter space for a specific problem can be challenging. This paper characterizes user requirements for hyperparameter tuning and proposes a prototype system to provide model-agnostic support. We conducted interviews with data science practitioners in industry to collect user requirements and identify opportunities for leveraging interactive visual support. We present HyperTuner, a prototype system that supports hyperparameter search and analysis via interactive visual analytics. The design treats models as black boxes with the hyperparameters and data as inputs, and the predictions and performance metrics as outputs. We discuss our preliminary evaluation results, where the data science practitioners deem HyperTuner as useful and desired to help gain insights into the influence of hyperparameters on model performance and convergence. The design also triggered additional requirements such as involving more advanced support for automated tuning and debugging.
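As a rough illustration of treating the model as a black box with hyperparameters in and a performance score out, here is a minimal random-search loop that also keeps the full trial history (the kind of record an interactive tool would visualize). The `toy_train` function and the search space are invented for the example; this is not HyperTuner’s API.

```python
import random

def random_search(train_fn, space, n_trials=20, seed=0):
    """Model-agnostic random search: train_fn maps a hyperparameter dict
    to a validation score (higher is better) and is otherwise opaque."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    trials = []
    for _ in range(n_trials):
        cfg = {name: rng.choice(values) for name, values in space.items()}
        score = train_fn(cfg)
        trials.append((cfg, score))  # keep full history for later visual analysis
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score, trials

# Hypothetical black-box "training" with a sweet spot at lr=0.1, depth=4
def toy_train(cfg):
    return -abs(cfg["lr"] - 0.1) - 0.1 * abs(cfg["depth"] - 4)

space = {"lr": [0.001, 0.01, 0.1, 1.0], "depth": [2, 4, 8]}
best, score, history = random_search(toy_train, space, n_trials=50)
```

A tool in the spirit of the paper would render `history` as, say, a scatter of score versus each hyperparameter, letting the analyst see where the sweet spot lies rather than only reporting `best`.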
DOI: https://doi.org/10.1109/MLUI52768.2018.10075647 (published 2018-10-22)
Citations: 18
Providing Contextual Assistance in Response to Frustration in Visual Analytics Tasks
P. Panwar, A. Bradley, C. Collins
This paper proposes a method for helping users in visual analytic tasks by using machine learning to detect and respond to frustration and provide appropriate recommendations and guidance. We have collected an emotion dataset from 28 participants carrying out intentionally difficult visualization tasks and used it to build an interactive frustration state detection model which detects frustration using data streaming from a small wrist-worn skin conductance device and eye tracking. We present a work-in-progress design exploration for interventions appropriate to different intensities of frustrations detected by the model. The interaction method and the level of interruption and assistance can be adjusted in response to the intensity and longevity of detected user states.
DOI: https://doi.org/10.1109/MLUI52768.2018.10075561 (published 2018-10-22)
Citations: 4
A Human-in-the-Loop Software Platform
Fang Cao, David J. Scroggins, Lebna V. Thomas, Eli T. Brown
Human-in-the-Loop (HIL) analytics systems blend the intuitive sensemaking abilities of humans with the raw number-crunching capability of machine learning. The web and front-end visualization libraries, such as D3.js, make it easier than ever to develop cross-platform HIL systems for wide distribution. Analytics toolkits such as scikit-learn provide straightforward, coherent interfaces for a variety of machine learning algorithms. However, creating novel HIL systems requires expertise in a range of skills including data visualization, web engineering, and machine learning. The Library for Interactive Human-Computer Analytics (LIHCA) is a platform to simplify creating applications that use interactive visualizations to steer back-end machine learners. Developers can enhance their interactive visualizations by connecting to a LIHCA API back end that manages data, runs machine learning algorithms, and returns the results in a visualization-convenient format. We provide a discussion of design considerations for HIL systems, an implementation of LIHCA to satisfy those considerations, and a set of implemented examples to illustrate the usage of the library.
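The back-end contract described above might look roughly like this sketch: the front end posts points the user has grouped, the back end fits a learner and replies with JSON the visualization can bind to. The function name, the nearest-centroid learner, and the payload shape are assumptions for illustration, not LIHCA’s actual API.

```python
import json

def handle_interaction(labeled, unlabeled):
    """Sketch of a HIL back-end step: fit a simple learner (nearest
    centroid) on user-grouped points and return predictions for the
    remaining points in a visualization-convenient JSON format."""
    centroids = {}
    for label, points in labeled.items():
        dims = len(points[0])
        centroids[label] = [sum(p[d] for p in points) / len(points)
                            for d in range(dims)]

    def nearest(p):
        return min(centroids, key=lambda lbl: sum(
            (a - b) ** 2 for a, b in zip(p, centroids[lbl])))

    return json.dumps({"predictions": [nearest(p) for p in unlabeled]})

# The user dragged four points into groups "A" and "B"; classify the rest
reply = handle_interaction(
    {"A": [[0, 0], [0, 1]], "B": [[5, 5], [6, 5]]},
    [[0.2, 0.3], [5.5, 5.0]],
)
```

In a real deployment this function would sit behind an HTTP endpoint and the learner would come from a toolkit such as scikit-learn, as the abstract suggests.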
DOI: https://doi.org/10.1109/MLUI52768.2018.10075650 (published 2018-10-22)
Citations: 0
Speculative Execution for Guided Visual Analytics
F. Sperrle, J. Bernard, M. Sedlmair, D. Keim, Mennatallah El-Assady
We propose the concept of Speculative Execution for Visual Analytics and discuss its effectiveness for model exploration and optimization. Speculative Execution enables the automatic generation of alternative, competing model configurations that do not alter the current model state unless explicitly confirmed by the user. These alternatives are computed based on either user interactions or model quality measures and can be explored using delta-visualizations. By automatically proposing modeling alternatives, systems employing Speculative Execution can shorten the gap between users and models, reduce the confirmation bias and speed up optimization processes. In this paper, we have assembled five application scenarios showcasing the potential of Speculative Execution, as well as a potential for further research.
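A minimal sketch of the core idea, with invented names: alternative configurations are generated and scored as copies, and the committed model state changes only on an explicit confirm.

```python
class SpeculativeModel:
    """Sketch of Speculative Execution: alternatives are computed and
    scored without altering the committed state until the user confirms
    one. (Class and method names are illustrative, not from the paper.)"""

    def __init__(self, config, quality_fn):
        self.config = dict(config)
        self.quality_fn = quality_fn

    def speculate(self, param, candidates):
        """Score competing configurations that vary one parameter,
        best first; the current config is never mutated."""
        alternatives = []
        for value in candidates:
            alt = dict(self.config, **{param: value})  # copy, never mutate
            alternatives.append((alt, self.quality_fn(alt)))
        return sorted(alternatives, key=lambda t: -t[1])

    def confirm(self, alt_config):
        """Adopt an alternative only on explicit user confirmation."""
        self.config = dict(alt_config)

# Toy quality measure peaking at k=3
model = SpeculativeModel({"k": 8}, quality_fn=lambda c: -abs(c["k"] - 3))
ranked = model.speculate("k", [2, 3, 5])
```

Presenting `ranked` through delta-visualizations, as the abstract proposes, lets the user compare alternatives against the unchanged current state before committing.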
DOI: https://doi.org/10.1109/MLUI52768.2018.10075559 (published 2018-10-22)
Citations: 18
Using Hidden Markov Models to Determine Cognitive States of Visual Analytic Users
M. Aboufoul, Ryan Wesslen, Isaac Cho, Wenwen Dou, Samira Shaikh
Many visual analytics tools exist to assist users in examining large amounts of information at once via coordinated views that include graphs, network connections and maps. However, the cognitive processes that those users undergo while using such tools remain a mystery. Many psychological studies suggest that individuals may undergo some planning stage followed by analysis before finally making conclusions when examining large amounts of analytical data with the goal of reaching a decision. While the general order of these cognitive states has been theorized, the exact states of individuals at specific points during their interaction with visual analytic systems remain unclear. In this work, we developed models to determine the cognitive states of users based solely on their interactions with visual analytics systems via Hidden Markov Models. Hidden Markov Models allow for the classification of observations through hidden states (cognitive states in our case) as well as the prediction of future cognitive states. We generate these models through unsupervised learning and use established metrics such as AIC and BIC to evaluate our models. Our solutions are designed to help improve visual analytics tools by providing a better understanding of the cognitive thought processes of users during data-intensive analysis tasks.
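For concreteness, here is a discrete-HMM forward pass computing the observation likelihood that a model-selection criterion such as AIC would consume. The two hidden states, the observation symbols, and all probabilities are toy numbers, not values from the study.

```python
import math

def hmm_log_likelihood(obs, start, trans, emit):
    """Forward algorithm for a discrete HMM: log P(observations | model).
    Hidden states might correspond to hypothesized cognitive states
    (e.g. planning vs. analysis); observations to interaction types."""
    n = len(start)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[p] * trans[p][s] for p in range(n)) * emit[s][o]
                 for s in range(n)]
    return math.log(sum(alpha))

def aic(log_lik, n_params):
    """Akaike Information Criterion: lower is better across candidate models."""
    return 2 * n_params - 2 * log_lik

# Two hidden states, two observation symbols (toy parameters)
start = [0.6, 0.4]
trans = [[0.7, 0.3], [0.4, 0.6]]
emit  = [[0.9, 0.1], [0.2, 0.8]]
ll = hmm_log_likelihood([0, 1, 0], start, trans, emit)
```

Fitting several candidate state counts and comparing their `aic` values is the standard way such unsupervised models are ranked, which matches the evaluation the abstract describes.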
DOI: https://doi.org/10.1109/MLUI52768.2018.10075648 (published 2018-10-22)
Citations: 4
Computer-supported Interactive Assignment of Keywords for Literature Collections
S. Agarwal, Fabian Beck
A curated literature collection on a specific topic helps researchers to find relevant articles quickly. Assigning multiple keywords to each article is one of the techniques to structure such a collection. But it is challenging to assign all the keywords consistently without any gaps or ambiguities. We propose to support the user with a machine learning technique that suggests keywords for articles in a literature collection browser. We provide visual explanations to make the keyword suggestions transparent. The suggestions are based on previous keyword assignments. The machine learning technique learns on the fly from the interactive assignments of the user. We seamlessly integrate the proposed technique in an existing literature collection browser and investigate various usage scenarios through an early prototype.
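One simple way to suggest keywords from previous assignments is a nearest-neighbour heuristic: keywords of textually similar, already-tagged articles vote, weighted by similarity. The abstract does not specify the paper’s actual learner, so the scheme below is an assumption for illustration.

```python
import math
from collections import Counter

def suggest_keywords(text, tagged_articles, top_n=3):
    """Rank keyword suggestions for a new article by cosine similarity
    of word counts against previously tagged articles (a simple
    stand-in for the paper's on-the-fly learning component)."""
    def vec(t):
        return Counter(t.lower().split())

    def cos(a, b):
        dot = sum(a[w] * b[w] for w in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    target = vec(text)
    scores = Counter()
    for body, keywords in tagged_articles:
        sim = cos(target, vec(body))
        for kw in keywords:
            scores[kw] += sim  # keywords of similar articles score higher
    return [kw for kw, _ in scores.most_common(top_n)]

corpus = [
    ("interactive machine learning with user feedback", ["IML", "HCI"]),
    ("graph drawing algorithms for large networks", ["graphs"]),
]
suggested = suggest_keywords("user feedback for interactive learning", corpus)
```

Because the per-keyword scores are explicit sums over similar articles, they also support the visual explanations the abstract calls for: each suggestion can be traced back to the articles that contributed to it.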
DOI: https://doi.org/10.1109/MLUI52768.2018.10075564 (published 2018-10-22)
Citations: 4
A Bidirectional Pipeline for Semantic Interaction
Michelle Dowling, John E. Wenskovitch, P. Hauck, A. Binford, Nicholas F. Polys, Chris North
Semantic interaction techniques in visual analytics tools allow analysts to indirectly adjust model parameters by directly manipulating the visual output of the models. Many existing tools that support semantic interaction do so with a number of similar features, including using a set of mathematical models that are composed within a pipeline, having a semantic interaction be interpreted by an inverse computation of one or more mathematical models, and using an underlying bidirectional structure within the pipeline. We propose a new visual analytics pipeline that captures these necessary features of semantic interactions. To demonstrate how this pipeline can be used, we represent existing visual analytics tools and their semantic interactions within this pipeline. We also explore a series of new visual analytics tools with semantic interaction to highlight how the new pipeline can represent new research as well.
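The forward/inverse structure can be shown in a toy 1D layout: the forward pass projects points with dimension weights, and the inverse pass recovers weights from where the user placed the points. The per-dimension fit below is a deliberately crude stand-in for the pipeline’s inverse computation, written only to make the bidirectionality concrete.

```python
def forward(points, weights):
    """Forward pass: project each high-dimensional point to a scalar
    layout position by a weighted sum of its dimensions."""
    return [sum(w * x for w, x in zip(weights, p)) for p in points]

def inverse(points, desired):
    """Inverse pass (sketch): infer dimension weights from the positions
    the user dragged points to, via an independent fit per dimension.
    A real pipeline would solve the joint inverse problem instead."""
    dims = len(points[0])
    weights = []
    for d in range(dims):
        num = sum(points[i][d] * desired[i] for i in range(len(points)))
        den = sum(points[i][d] ** 2 for i in range(len(points)))
        weights.append(num / den if den else 0.0)
    return weights

# User drags two points to positions 2.0 and 3.0; recover weights, re-project
pts = [[1.0, 0.0], [0.0, 1.0]]
w = inverse(pts, [2.0, 3.0])
out = forward(pts, w)
```

The round trip (`inverse` then `forward`) reproducing the dragged positions is exactly the consistency a bidirectional semantic-interaction pipeline relies on.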
DOI: https://doi.org/10.1109/MLUI52768.2018.10075562 (published 2018-10-22)
Citations: 18