
Proceedings of the Fifth Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization (Latest Publications)

More bang for your research buck: toward recommender systems for visual analytics
L. Blaha, Dustin L. Arendt, Fairul Mohd-Zaid
We propose a set of common sense steps required to develop a recommender system for visual analytics. Such a system is an essential way to get additional mileage out of costly user studies, which are typically archived post publication. Crucially, we propose conducting user studies in a manner that allows machine learning techniques to elucidate relationships between experimental data (i.e., user performance) and metrics about the data being visualized and candidate visual representations. We execute a case study within our framework to extract simple rules of thumb that relate different data metrics and visualization characteristics to patterns of user errors on several network analysis tasks. Our case study suggests a research agenda supporting the development of general, robust visualization recommender systems.
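The workflow this abstract describes, learning human-readable rules that relate data metrics and visualization choices to user error, can be illustrated with a shallow decision tree. The feature names, data values, and model choice below are assumptions for illustration, not the authors' published pipeline:

```python
# Illustrative sketch only: learn readable "rules of thumb" linking metrics
# of the visualized data and the chosen representation to user error rates.
from sklearn.tree import DecisionTreeRegressor, export_text

# Hypothetical archived study records:
# (node_count, edge_density, representation: 0=node-link, 1=matrix) -> error rate
records = [
    ((50,  0.05, 0), 0.10),
    ((50,  0.40, 0), 0.35),
    ((50,  0.40, 1), 0.15),
    ((500, 0.05, 0), 0.30),
    ((500, 0.05, 1), 0.20),
    ((500, 0.40, 1), 0.25),
]
X = [list(features) for features, _ in records]
y = [error for _, error in records]

# A shallow tree keeps the learned relationships readable as rules of thumb.
tree = DecisionTreeRegressor(max_depth=2, random_state=0)
tree.fit(X, y)
print(export_text(tree, feature_names=["node_count", "edge_density", "representation"]))
```

The printed tree reads directly as recommendations of the kind the paper targets, for example "for dense graphs, prefer the matrix representation".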
{"title":"More bang for your research buck: toward recommender systems for visual analytics","authors":"L. Blaha, Dustin L. Arendt, Fairul Mohd-Zaid","doi":"10.1145/2669557.2669566","DOIUrl":"https://doi.org/10.1145/2669557.2669566","url":null,"abstract":"We propose a set of common sense steps required to develop a recommender system for visual analytics. Such a system is an essential way to get additional mileage out of costly user studies, which are typically archived post publication. Crucially, we propose conducting user studies in a manner that allows machine learning techniques to elucidate relationships between experimental data (i.e., user performance) and metrics about the data being visualized and candidate visual representations. We execute a case study within our framework to extract simple rules of thumb that relate different data metrics and visualization characteristics to patterns of user errors on several network analysis tasks. Our case study suggests a research agenda supporting the development of general, robust visualization recommender systems.","PeriodicalId":179584,"journal":{"name":"Proceedings of the Fifth Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131552571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Just the other side of the coin?: from error- to insight-analysis
M. Smuc
To shed more light on data explorers dealing with complex information visualizations in real world scenarios, new methodologies and models are needed which overcome existing explanatory gaps. Therefore, a novel model to analyze users' errors and insights is outlined that is derived from Rasmussen's model on different levels of cognitive processing, and integrates explorers' skills, schemes, and knowledge. After locating this model in the landscape of theories for visual analytics, the main building blocks of the model, where three cognitive processing levels are interlinked, are described in detail. Finally, its applicability, challenges in measurement and future research options are discussed.
{"title":"Just the other side of the coin?: from error- to insight-analysis","authors":"M. Smuc","doi":"10.1145/2669557.2669570","DOIUrl":"https://doi.org/10.1145/2669557.2669570","url":null,"abstract":"To shed more light on data explorers dealing with complex information visualizations in real world scenarios, new methodologies and models are needed which overcome existing explanatory gaps. Therefore, a novel model to analyze users' errors and insights is outlined that is derived from Rasmussen's model on different levels of cognitive processing, and integrates explorers' skills, schemes, and knowledge. After locating this model in the landscape of theories for visual analytics, the main building blocks of the model, where three cognitive processing levels are interlinked, are described in detail. Finally, its applicability, challenges in measurement and future research options are discussed.","PeriodicalId":179584,"journal":{"name":"Proceedings of the Fifth Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132172862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Sanity check for class-coloring-based evaluation of dimension reduction techniques
Michaël Aupetit
Dimension reduction (DR) techniques used to visualize multidimensional data provide a scatterplot spatialization of data similarities. A widespread way to evaluate the quality of such DR techniques is to use labeled data as ground truth and to call on the reader as a witness to qualify the visualization by looking at class-cluster correlations within the scatterplot. We expose the pitfalls of this evaluation process and we propose a principled solution to guide researchers to improve the way they use this visual evaluation of DR techniques.
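One possible form such a sanity check could take (the abstract does not spell out the authors' principled solution) is to compare the class separation visible in a projection against a permuted-label baseline, so that apparent class-cluster structure is not credited to the DR technique by accident. A minimal sketch, assuming scikit-learn and the iris data purely for illustration:

```python
# Sketch of a label-permutation sanity check for class-colored DR scatterplots.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.metrics import silhouette_score

X, labels = load_iris(return_X_y=True)
projection = PCA(n_components=2).fit_transform(X)

# Class separation of the actual projection, scored with the true labels.
observed = silhouette_score(projection, labels)

# Null distribution: the same projection scored against shuffled labels.
rng = np.random.default_rng(0)
null = [silhouette_score(projection, rng.permutation(labels)) for _ in range(100)]

print(f"observed separation: {observed:.3f}")
print(f"permuted-label baseline: {np.mean(null):.3f} +/- {np.std(null):.3f}")
```

A projection whose observed score barely exceeds the permuted baseline should not be read as evidence of the technique's quality.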
{"title":"Sanity check for class-coloring-based evaluation of dimension reduction techniques","authors":"Michaël Aupetit","doi":"10.1145/2669557.2669578","DOIUrl":"https://doi.org/10.1145/2669557.2669578","url":null,"abstract":"Dimension Reduction techniques used to visualize multidimensional data provide a scatterplot spatialization of data similarities. A widespread way to evaluate the quality of such DR techniques is to use labeled data as a ground truth and to call the reader as a witness to qualify the visualization by looking at class-cluster correlations within the scatterplot. We expose the pitfalls of this evaluation process and we propose a principled solution to guide researchers to improve the way they use this visual evaluation of DR techniques.","PeriodicalId":179584,"journal":{"name":"Proceedings of the Fifth Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116894669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 20
Towards analyzing eye tracking data for evaluating interactive visualization systems
Tanja Blascheck, T. Ertl
Eye tracking can be a suitable evaluation method for determining which regions and objects of a stimulus a human viewer perceived. Analysts can use eye tracking as a complement to other evaluation methods for a more holistic assessment of novel visualization techniques beyond time and error measures. Up to now, most stimuli in eye tracking have been either static stimuli or videos. Since interaction is an integral part of visualization, an evaluation should include interaction. In this paper, we present an extensive literature review of evaluation methods for interactive visualizations. Based on the literature review, we propose ideas for analyzing eye movement data from interactive stimuli. This requires looking critically at the challenges induced by interactive stimuli. The first step is to collect data using different study methods. In our case, we look at using eye tracking, interaction logs, and thinking-aloud protocols. In addition, this requires a thorough synchronization of the mentioned study methods. To analyze the collected data, new analysis techniques have to be developed. We investigate existing approaches and how we can adapt them to new data types, as well as sketch ideas of how new approaches might look.
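The synchronization step the abstract calls for can be reduced to timestamp alignment: each gaze sample is annotated with the interaction that was active when it was recorded. The record formats below are hypothetical; a minimal sketch:

```python
# Align timestamped gaze samples with a timestamped interaction log.
import bisect

# (timestamp_ms, x, y) gaze samples and (timestamp_ms, event) interaction log,
# assumed to share one clock.
gaze = [(100, 310, 220), (180, 315, 230), (650, 90, 400), (900, 95, 410)]
interactions = [(0, "idle"), (500, "zoom_in"), (850, "select_node")]

event_times = [t for t, _ in interactions]

def active_interaction(timestamp):
    """Return the interaction event in effect at the given timestamp."""
    i = bisect.bisect_right(event_times, timestamp) - 1
    return interactions[i][1]

# Each gaze sample now carries its interaction context for joint analysis.
annotated = [(t, x, y, active_interaction(t)) for t, x, y in gaze]
for sample in annotated:
    print(sample)
```

The same lookup can attach thinking-aloud segments to gaze data, provided all three streams are recorded against a common clock.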
{"title":"Towards analyzing eye tracking data for evaluating interactive visualization systems","authors":"Tanja Blascheck, T. Ertl","doi":"10.1145/2669557.2669569","DOIUrl":"https://doi.org/10.1145/2669557.2669569","url":null,"abstract":"Eye tracking can be a suitable evaluation method for determining which regions and objects of a stimulus a human viewer perceived. Analysts can use eye tracking as a complement to other evaluation methods for a more holistic assessment of novel visualization techniques beyond time and error measures. Up to now, most stimuli in eye tracking are either static stimuli or videos. Since interaction is an integral part of visualization, an evaluation should include interaction. In this paper, we present an extensive literature review on evaluation methods for interactive visualizations. Based on the literature review we propose ideas for analyzing eye movement data from interactive stimuli. This requires looking critically at challenges induced by interactive stimuli. The first step is to collect data using different study methods. In our case, we look at using eye tracking, interaction logs, and thinking-aloud protocols. In addition, this requires a thorough synchronization of the mentioned study methods. To analyze the collected data new analysis techniques have to be developed. We investigate existing approaches and how we can adapt them to new data types as well as sketch ideas how new approaches can look like.","PeriodicalId":179584,"journal":{"name":"Proceedings of the Fifth Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120940303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 13
Repeated measures design in crowdsourcing-based experiments for visualization
A. Abdul-Rahman, Karl J. Proctor, Brian Duffy, Min Chen
Crowdsourcing platforms, such as Amazon's Mechanical Turk (MTurk), are providing visualization researchers with a new avenue for conducting empirical studies. While such platforms offer several advantages over lab-based studies, they also feature some "unknown" or "uncontrolled" variables, which could potentially introduce serious confounding effects in the resultant measurement data. In this paper, we present our experience of using repeated measures in three empirical studies using MTurk. Each study presented participants with a set of stimuli, each featuring a condition of an independent variable. Participants were exposed to stimuli repeatedly in a pseudo-random order through four trials and their responses were measured digitally. Only a small portion of the participants were able to perform with absolute consistency for all stimuli throughout each experiment. This suggests that a repeated measures design is highly desirable (if not essential) when designing empirical studies for crowdsourcing platforms. Additionally, the majority of participants performed their tasks with reasonable consistency when all stimuli in an experiment are considered collectively. In other words, to most participants, inconsistency occurred occasionally. This suggests that crowdsourcing remains a valid experimental environment, provided that one can integrate the means to observe and alleviate the potential confounding effects of "unknown" or "uncontrolled" variables in the design of the experiment.
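The consistency measure the study relies on can be computed directly: a participant is absolutely consistent on a stimulus if all four trial responses agree. A minimal sketch over a hypothetical response table:

```python
# Per-participant response consistency across four repeated trials.
responses = {
    # participant -> {stimulus: [trial1, trial2, trial3, trial4]}
    "p1": {"s1": ["A", "A", "A", "A"], "s2": ["B", "B", "B", "B"]},
    "p2": {"s1": ["A", "B", "A", "A"], "s2": ["B", "B", "B", "B"]},
    "p3": {"s1": ["A", "A", "B", "B"], "s2": ["C", "B", "C", "B"]},
}

for participant, per_stimulus in responses.items():
    # A stimulus counts as consistent only if every trial gave the same answer.
    consistent = sum(len(set(trials)) == 1 for trials in per_stimulus.values())
    total = len(per_stimulus)
    print(f"{participant}: consistent on {consistent}/{total} stimuli")
```

Aggregating this measure over participants makes the paper's observation quantifiable: few workers are consistent on every stimulus, but most are consistent on most of them.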
{"title":"Repeated measures design in crowdsourcing-based experiments for visualization","authors":"A. Abdul-Rahman, Karl J. Proctor, Brian Duffy, Min Chen","doi":"10.1145/2669557.2669561","DOIUrl":"https://doi.org/10.1145/2669557.2669561","url":null,"abstract":"Crowdsourcing platforms, such as Amazon's Mechanical Turk (MTurk), are providing visualization researchers with a new avenue for conducting empirical studies. While such platforms offer several advantages over lab-based studies, they also feature some \"unknown\" or \"uncontrolled\" variables, which could potentially introduce serious confounding effects in the resultant measurement data. In this paper, we present our experience of using repeated measures in three empirical studies using MTurk. Each study presented participants with a set of stimuli, each featuring a condition of an independent variable. Participants were exposed to stimuli repeatedly in a pseudo-random order through four trials and their responses were measured digitally. Only a small portion of the participants were able to perform with absolute consistency for all stimuli throughout each experiment. This suggests that a repeated measures design is highly desirable (if not essential) when designing empirical studies for crowdsourcing platforms. Additionally, the majority of participants performed their tasks with reasonable consistency when all stimuli in an experiment are considered collectively. In other words, to most participants, inconsistency occurred occasionally. This suggests that crowdsourcing remains a valid experimental environment, provided that one can integrate the means to observe and alleviate the potential confounding effects of \"unknown\" or \"uncontrolled\" variables in the design of the experiment.","PeriodicalId":179584,"journal":{"name":"Proceedings of the Fifth Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127934238","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Toward visualization-specific heuristic evaluation
Alvin E Tarrell, Ann L. Fruhling, R. Borgo, C. Forsell, G. Grinstein, J. Scholtz
This position paper describes heuristic evaluation as it relates to visualization and visual analytics. We review heuristic evaluation in general, then comment on previous process-based, performance-based, and framework-based efforts to adapt the method to visualization-specific needs. We postulate that the framework-based approach holds the most promise for future progress in development of visualization-specific heuristics, and propose a specific framework as a starting point. We then recommend a method for community involvement and input into the further development of the heuristic framework and more detailed design and evaluation guidelines.
{"title":"Toward visualization-specific heuristic evaluation","authors":"Alvin E Tarrell, Ann L. Fruhling, R. Borgo, C. Forsell, G. Grinstein, J. Scholtz","doi":"10.1145/2669557.2669580","DOIUrl":"https://doi.org/10.1145/2669557.2669580","url":null,"abstract":"This position paper describes heuristic evaluation as it relates to visualization and visual analytics. We review heuristic evaluation in general, then comment on previous process-based, performance-based, and framework-based efforts to adapt the method to visualization-specific needs. We postulate that the framework-based approach holds the most promise for future progress in development of visualization-specific heuristics, and propose a specific framework as a starting point. We then recommend a method for community involvement and input into the further development of the heuristic framework and more detailed design and evaluation guidelines.","PeriodicalId":179584,"journal":{"name":"Proceedings of the Fifth Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114277073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 18
Utility evaluation of models 模型效用评价
J. Scholtz, Oriana Love, M. Whiting, Duncan Hodges, Lia Emanuel, D. Fraser
In this paper, we present three case studies of utility evaluations of underlying models in software systems: a user-model, technical and social models both singly and in combination, and a research-based model for user identification. Each of the three cases used a different approach to evaluating the model and each had challenges to overcome in designing and implementing the evaluation. We describe the methods we used and challenges faced in designing the evaluation procedures, summarize the lessons learned, enumerate considerations for those undertaking such evaluations, and present directions for future work.
{"title":"Utility evaluation of models","authors":"J. Scholtz, Oriana Love, M. Whiting, Duncan Hodges, Lia Emanuel, D. Fraser","doi":"10.1145/2669557.2669562","DOIUrl":"https://doi.org/10.1145/2669557.2669562","url":null,"abstract":"In this paper, we present three case studies of utility evaluations of underlying models in software systems: a user-model, technical and social models both singly and in combination, and a research-based model for user identification. Each of the three cases used a different approach to evaluating the model and each had challenges to overcome in designing and implementing the evaluation. We describe the methods we used and challenges faced in designing the evaluation procedures, summarize the lessons learned, enumerate considerations for those undertaking such evaluations, and present directions for future work.","PeriodicalId":179584,"journal":{"name":"Proceedings of the Fifth Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129461773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
User tasks for evaluation: untangling the terminology throughout visualization design and development
A. Rind, W. Aigner, Markus Wagner, S. Miksch, T. Lammarsch
User tasks play a pivotal role in evaluation throughout visualization design and development. However, the term 'task' is used ambiguously within the visualization community. In this position paper, we critically analyze the relevant literature and systematically compare definitions of 'task' and the usage of related terminology. In doing so, we identify a three-dimensional conceptual space of user tasks in visualization. Using these dimensions, visualization researchers can better formulate their contributions, which helps advance visualization as a whole.
{"title":"User tasks for evaluation: untangling the terminology throughout visualization design and development","authors":"A. Rind, W. Aigner, Markus Wagner, S. Miksch, T. Lammarsch","doi":"10.1145/2669557.2669568","DOIUrl":"https://doi.org/10.1145/2669557.2669568","url":null,"abstract":"User tasks play a pivotal role in evaluation throughout visualization design and development. However, the term 'task' is used ambiguously within the visualization community. In this position paper, we critically analyze the relevant literature and systematically compare definitions for 'task' and the usage of related terminology. In doing so, we identify a three-dimensional conceptual space of user tasks in visualization. Using these dimensions, visualization researchers can better formulate their contributions which helps advance visualization as a whole.","PeriodicalId":179584,"journal":{"name":"Proceedings of the Fifth Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115894333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Evaluation of information visualization techniques: analysing user experience with reaction cards
Tanja Mercun
The paper originates from the idea that in the field of information visualization, positive user experience is extremely important if we wish to see users adopt and engage with novel information visualization tools. Suggesting the use of the product reaction card method to evaluate user experience, the paper gives an example of the FrbrVis prototype to demonstrate how the results of this method could be analysed and used for comparing different designs. The authors also propose five dimensions of user experience (UX) that could be gathered from reaction cards and conclude that the results from reaction cards mirror and add to other performance and preference indicators.
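Analysing reaction-card selections amounts to tallying the chosen cards into dimensions. The abstract does not name the five UX dimensions or the card-to-dimension mapping, so both are hypothetical placeholders in this sketch:

```python
# Tally reaction-card selections into UX dimensions.
from collections import Counter

# Hypothetical mapping from reaction cards to UX dimensions; the paper's
# actual dimensions and assignments are not given in the abstract.
card_to_dimension = {
    "fun": "engagement", "exciting": "engagement",
    "intuitive": "usability", "easy to use": "usability",
    "clean": "aesthetics", "attractive": "aesthetics",
    "slow": "performance", "responsive": "performance",
    "trustworthy": "credibility",
}

# Cards selected by participants after using a visualization prototype.
selected = ["fun", "intuitive", "attractive", "responsive", "fun", "clean"]

tally = Counter(card_to_dimension[card] for card in selected)
for dimension, count in tally.most_common():
    print(f"{dimension}: {count}")
```

Such dimension tallies can then be set alongside time, error, and preference measures when comparing candidate designs.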
{"title":"Evaluation of information visualization techniques: analysing user experience with reaction cards","authors":"Tanja Mercun","doi":"10.1145/2669557.2669565","DOIUrl":"https://doi.org/10.1145/2669557.2669565","url":null,"abstract":"The paper originates from the idea that in the field of information visualization, positive user experience is extremely important if we wish to see users adopt and engage with the novel information visualization tools. Suggesting the use of product reaction card method to evaluate user experience, the paper gives an example of FrbrVis prototype to demonstrate how the results of this method could be analysed and used for comparing different designs. The authors also propose five dimensions of user experience (UX) that could be gathered from reaction cards and conclude that the results from reaction cards mirror and add to other performance and preference indicators.","PeriodicalId":179584,"journal":{"name":"Proceedings of the Fifth Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122798829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
Value-driven evaluation of visualizations
J. Stasko
Existing evaluations of data visualizations often employ a series of low-level, detailed questions to be answered or benchmark tasks to be performed. While that methodology can be helpful to determine a visualization's usability, such evaluations overlook the key benefits that visualization uniquely provides over other data analysis methods. I propose a value-driven evaluation of visualizations in which a person illustrates a system's value through four important capabilities: minimizing the time to answer diverse questions, spurring the generation of insights and insightful questions, conveying the essence of the data, and generating confidence and knowledge about the data's domain and context. Additionally, I explain how interaction is instrumental in creating much of the value that can be found in visualizations.
{"title":"Value-driven evaluation of visualizations","authors":"J. Stasko","doi":"10.1145/2669557.2669579","DOIUrl":"https://doi.org/10.1145/2669557.2669579","url":null,"abstract":"Existing evaluations of data visualizations often employ a series of low-level, detailed questions to be answered or benchmark tasks to be performed. While that methodology can be helpful to determine a visualization's usability, such evaluations overlook the key benefits that visualization uniquely provides over other data analysis methods. I propose a value-driven evaluation of visualizations in which a person illustrates a system's value through four important capabilities: minimizing the time to answer diverse questions, spurring the generation of insights and insightful questions, conveying the essence of the data, and generating confidence and knowledge about the data's domain and context. Additionally, I explain how interaction is instrumental in creating much of the value that can be found in visualizations.","PeriodicalId":179584,"journal":{"name":"Proceedings of the Fifth Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2014-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115080941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 50