Generative Data Models for Validation and Evaluation of Visualization Techniques

C. Schulz, Arlind Nocaj, Mennatallah El-Assady, S. Frey, Marcel Hlawatsch, Michael Hund, G. Karch, Rudolf Netzel, Christin Schätzle, Miriam Butt, D. Keim, T. Ertl, U. Brandes, D. Weiskopf
DOI: 10.1145/2993901.2993907
Published: 2016-10-24, Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization
Citations: 25

Abstract

We argue that there is a need for substantially more research on the use of generative data models in the validation and evaluation of visualization techniques. For example, user studies require the display of representative and unconfounded visual stimuli, while algorithms need functional coverage and assessable benchmarks. However, data is often collected semi-automatically or entirely hand-picked, which obscures generality, impairs availability, and potentially violates privacy. Some sub-domains of visualization use synthetic data in the sense of generative data models, whereas others work with real-world data sets and simulations. Depending on the visualization domain, many generative data models are "side projects" created for the ad-hoc validation of a technique paper and are thus neither reusable nor general-purpose. We review existing work on popular data collections and generative data models in visualization to discuss the opportunities and consequences for technique validation, evaluation, and experiment design. We distill guidelines and future directions, and discuss how to engineer generative data models and how visualization research could benefit from more and better use of them.
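To make the notion of a generative data model concrete: one of the simplest examples is a parameterized generator of synthetic scatterplot stimuli, where the experimenter controls the factors (number of clusters, cluster size, spread) rather than hand-picking data sets. The sketch below is illustrative only and not taken from the paper; the function name and parameters are assumptions chosen for this example.

```python
import numpy as np

def generate_clusters(n_clusters=3, points_per_cluster=50, spread=0.05, seed=0):
    """Minimal generative data model for 2D scatterplot stimuli.

    Cluster centers are drawn uniformly inside the unit square and
    points are sampled from isotropic Gaussians around them. The
    parameters are the controlled, reportable factors that can be
    varied systematically across experimental conditions.
    """
    rng = np.random.default_rng(seed)  # seeded for reproducible stimuli
    centers = rng.uniform(0.2, 0.8, size=(n_clusters, 2))
    points = np.concatenate([
        rng.normal(loc=c, scale=spread, size=(points_per_cluster, 2))
        for c in centers
    ])
    labels = np.repeat(np.arange(n_clusters), points_per_cluster)
    return points, labels

# One stimulus with 4 clusters of 30 points each (120 points total).
pts, lab = generate_clusters(n_clusters=4, points_per_cluster=30)
```

Because the model is parameterized and seeded, every stimulus is reproducible and its ground truth (the cluster labels) is known, which is exactly what benchmarks and unconfounded user-study stimuli require.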