
Latest Publications: IEEE Transactions on Visualization and Computer Graphics

Contents
IF 5.2 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2021-02-01 | DOI: 10.1109/tvcg.2020.3033677
{"title":"Contents","authors":"","doi":"10.1109/tvcg.2020.3033677","DOIUrl":"https://doi.org/10.1109/tvcg.2020.3033677","url":null,"abstract":"","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":" ","pages":""},"PeriodicalIF":5.2,"publicationDate":"2021-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/tvcg.2020.3033677","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43763541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
VIS 2020 Steering Committees
IF 5.2 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2021-02-01 | DOI: 10.1109/tvcg.2020.3033716
{"title":"VIS 2020 Steering Committees","authors":"","doi":"10.1109/tvcg.2020.3033716","DOIUrl":"https://doi.org/10.1109/tvcg.2020.3033716","url":null,"abstract":"","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":"1 1","pages":""},"PeriodicalIF":5.2,"publicationDate":"2021-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41521558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Objective Observer-Relative Flow Visualization in Curved Spaces for Unsteady 2D Geophysical Flows.
IF 5.2 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2021-02-01 | Epub Date: 2021-01-28 | DOI: 10.1109/TVCG.2020.3030454 | Pages: 283-293
Peter Rautek, Matej Mlejnek, Johanna Beyer, Jakob Troidl, Hanspeter Pfister, Thomas Theußl, Markus Hadwiger

Computing and visualizing features in fluid flow often depends on the observer, or reference frame, relative to which the input velocity field is given. A desired property of feature detectors is therefore that they are objective, meaning independent of the input reference frame. However, the standard definition of objectivity is only given for Euclidean domains and cannot be applied in curved spaces. We build on methods from mathematical physics and Riemannian geometry to generalize objectivity to curved spaces, using the powerful notion of symmetry groups as the basis for definition. From this, we develop a general mathematical framework for the objective computation of observer fields for curved spaces, relative to which other computed measures become objective. An important property of our framework is that it works intrinsically in 2D, instead of in the 3D ambient space. This enables a direct generalization of the 2D computation via optimization of observer fields in flat space to curved domains, without having to perform optimization in 3D. We specifically develop the case of unsteady 2D geophysical flows given on spheres, such as the Earth. Our observer fields in curved spaces then enable objective feature computation as well as the visualization of the time evolution of scalar and vector fields, such that the automatically computed reference frames follow moving structures like vortices in a way that makes them appear to be steady.
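For readers unfamiliar with the term, the classical flat-space definition of objectivity referenced above (standard continuum-mechanics material, not the curved-space generalization contributed by the paper) can be stated as follows: an observer change is a time-dependent rotation plus translation, the velocity field itself picks up frame-dependent terms, and a quantity is called objective if it transforms without them.

```latex
% Classical (flat-space) objectivity, which the paper generalizes to curved domains.
% Q(t): time-dependent rotation, c(t): time-dependent translation.
\begin{align*}
  x^{*}(t) &= Q(t)\,x(t) + c(t), \qquad Q(t) \in SO(n), \\
  v^{*}(x^{*},t) &= Q(t)\,v(x,t) + \dot{Q}(t)\,x + \dot{c}(t)
      \qquad \text{(the velocity field is not objective)}, \\
  s^{*} &= s, \qquad u^{*} = Q\,u, \qquad T^{*} = Q\,T\,Q^{\mathsf{T}}
      \qquad \text{(objective scalar, vector, tensor)}.
\end{align*}
```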

Citations: 9
PlotThread: Creating Expressive Storyline Visualizations using Reinforcement Learning.
IF 5.2 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2021-02-01 | Epub Date: 2021-01-28 | DOI: 10.1109/TVCG.2020.3030467 | Pages: 294-303
Tan Tang, Renzhong Li, Xinke Wu, Shuhan Liu, Johannes Knittel, Steffen Koch, Lingyun Yu, Peiran Ren, Thomas Ertl, Yingcai Wu
Storyline visualizations are an effective means to present the evolution of plots and reveal the scenic interactions among characters. However, the design of storyline visualizations is a difficult task as users need to balance between aesthetic goals and narrative constraints. Despite that the optimization-based methods have been improved significantly in terms of producing aesthetic and legible layouts, the existing (semi-) automatic methods are still limited regarding 1) efficient exploration of the storyline design space and 2) flexible customization of storyline layouts. In this work, we propose a reinforcement learning framework to train an AI agent that assists users in exploring the design space efficiently and generating well-optimized storylines. Based on the framework, we introduce PlotThread, an authoring tool that integrates a set of flexible interactions to support easy customization of storyline visualizations. To seamlessly integrate the AI agent into the authoring process, we employ a mixed-initiative approach where both the agent and designers work on the same canvas to boost the collaborative design of storylines. We evaluate the reinforcement learning model through qualitative and quantitative experiments and demonstrate the usage of PlotThread using a collection of use cases.
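The abstract gives no implementation details; the sketch below is only a toy illustration of casting storyline layout as a reward-driven search problem. The state, action, and reward definitions are hypothetical, and a random edit policy stands in for PlotThread's trained agent.

```python
"""Toy storyline-layout environment with a crossing/wiggle reward of the kind
an RL agent could be trained to maximize. NOT PlotThread's actual formulation."""
import random

N_CHARS, N_STEPS, N_SLOTS = 4, 6, 6

def crossings(layout):
    """Count pairwise line crossings between consecutive time steps."""
    c = 0
    for t in range(N_STEPS - 1):
        for a in range(N_CHARS):
            for b in range(a + 1, N_CHARS):
                if (layout[a][t] - layout[b][t]) * (layout[a][t + 1] - layout[b][t + 1]) < 0:
                    c += 1
    return c

def wiggles(layout):
    """Total vertical movement of all lines (an aesthetic penalty)."""
    return sum(abs(layout[a][t + 1] - layout[a][t])
               for a in range(N_CHARS) for t in range(N_STEPS - 1))

def reward(layout):
    return -(crossings(layout) + 0.1 * wiggles(layout))

def random_layout():
    return [[random.randrange(N_SLOTS) for _ in range(N_STEPS)] for _ in range(N_CHARS)]

def step(layout, action):
    """Apply an action (character, time step, up/down) and return the new layout."""
    char, t, delta = action
    new = [row[:] for row in layout]
    new[char][t] = max(0, min(N_SLOTS - 1, new[char][t] + delta))
    return new

def random_policy(_layout):
    """Stand-in for a trained agent: propose a random local edit."""
    return (random.randrange(N_CHARS), random.randrange(N_STEPS), random.choice([-1, 1]))

if __name__ == "__main__":
    layout = random_layout()
    for _ in range(500):                          # one greedy "episode"
        candidate = step(layout, random_policy(layout))
        if reward(candidate) >= reward(layout):   # keep non-worsening edits
            layout = candidate
    print("final reward:", reward(layout), "crossings:", crossings(layout))
```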
Citations: 32
VisCode: Embedding Information in Visualization Images using Encoder-Decoder Network.
IF 5.2 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2021-02-01 | Epub Date: 2021-01-28 | DOI: 10.1109/TVCG.2020.3030343 | Pages: 326-336
Peiying Zhang, Chenhui Li, Changbo Wang

We present an approach called VisCode for embedding information into visualization images. This technology can implicitly embed data information specified by the user into a visualization while ensuring that the encoded visualization image is not distorted. The VisCode framework is based on a deep neural network. We propose to use visualization images and QR codes data as training data and design a robust deep encoder-decoder network. The designed model considers the salient features of visualization images to reduce the explicit visual loss caused by encoding. To further support large-scale encoding and decoding, we consider the characteristics of information visualization and propose a saliency-based QR code layout algorithm. We present a variety of practical applications of VisCode in the context of information visualization and conduct a comprehensive evaluation of the perceptual quality of encoding, decoding success rate, anti-attack capability, time performance, etc. The evaluation results demonstrate the effectiveness of VisCode.
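As a rough illustration of the encoder-decoder idea (not VisCode's actual architecture, losses, saliency-based layout, or QR-code message representation; all layer sizes and data below are placeholders), one could hide a bit-string in an image via a learned residual and train a decoder to recover it:

```python
"""Minimal encoder-decoder steganography sketch in PyTorch. Illustrative only;
raw bits are used instead of QR codes, and the network is deliberately tiny."""
import torch
import torch.nn as nn

MSG_BITS, H, W = 32, 64, 64

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + MSG_BITS, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, image, message):
        # Broadcast the message over the spatial grid and concatenate as channels.
        msg_plane = message.view(-1, MSG_BITS, 1, 1).expand(-1, MSG_BITS, H, W)
        residual = self.net(torch.cat([image, msg_plane], dim=1))
        return (image + residual).clamp(0, 1)      # encoded (stego) image

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, MSG_BITS),               # one logit per hidden bit
        )

    def forward(self, stego):
        return self.net(stego)

if __name__ == "__main__":
    enc, dec = Encoder(), Decoder()
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
    image = torch.rand(4, 3, H, W)                 # stand-in for chart images
    message = torch.randint(0, 2, (4, MSG_BITS)).float()
    for _ in range(5):                             # a few illustrative steps; a real run needs many more
        stego = enc(image, message)
        loss = nn.functional.mse_loss(stego, image) \
             + nn.functional.binary_cross_entropy_with_logits(dec(stego), message)
        opt.zero_grad(); loss.backward(); opt.step()
    acc = ((dec(enc(image, message)) > 0).float() == message).float().mean().item()
    print("bit accuracy:", acc)
```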

Citations: 26
Visual Reasoning Strategies for Effect Size Judgments and Decisions.
IF 5.2 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2021-02-01 | Epub Date: 2021-01-28 | DOI: 10.1109/TVCG.2020.3030335 | Pages: 272-282
Alex Kale, Matthew Kay, Jessica Hullman

Uncertainty visualizations often emphasize point estimates to support magnitude estimates or decisions through visual comparison. However, when design choices emphasize means, users may overlook uncertainty information and misinterpret visual distance as a proxy for effect size. We present findings from a mixed design experiment on Mechanical Turk which tests eight uncertainty visualization designs: 95% containment intervals, hypothetical outcome plots, densities, and quantile dotplots, each with and without means added. We find that adding means to uncertainty visualizations has small biasing effects on both magnitude estimation and decision-making, consistent with discounting uncertainty. We also see that visualization designs that support the least biased effect size estimation do not support the best decision-making, suggesting that a chart user's sense of effect size may not necessarily be identical when they use the same information for different tasks. In a qualitative analysis of users' strategy descriptions, we find that many users switch strategies and do not employ an optimal strategy when one exists. Uncertainty visualizations which are optimally designed in theory may not be the most effective in practice because of the ways that users satisfice with heuristics, suggesting opportunities to better understand visualization effectiveness by modeling sets of potential strategies.

Citations: 52
Info Vis Reviewers
IF 5.2 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2021-02-01 | DOI: 10.1109/tvcg.2020.3033652
{"title":"Info Vis Reviewers","authors":"","doi":"10.1109/tvcg.2020.3033652","DOIUrl":"https://doi.org/10.1109/tvcg.2020.3033652","url":null,"abstract":"","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":" ","pages":""},"PeriodicalIF":5.2,"publicationDate":"2021-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45990745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
VATLD: A Visual Analytics System to Assess, Understand and Improve Traffic Light Detection.
IF 5.2 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2021-02-01 | Epub Date: 2021-01-28 | DOI: 10.1109/TVCG.2020.3030350 | Pages: 261-271
Liang Gou, Lincan Zou, Nanxiang Li, Michael Hofmann, Arvind Kumar Shekar, Axel Wendt, Liu Ren

Traffic light detection is crucial for environment perception and decision-making in autonomous driving. State-of-the-art detectors are built upon deep Convolutional Neural Networks (CNNs) and have exhibited promising performance. However, one looming concern with CNN based detectors is how to thoroughly evaluate the performance of accuracy and robustness before they can be deployed to autonomous vehicles. In this work, we propose a visual analytics system, VATLD, equipped with a disentangled representation learning and semantic adversarial learning, to assess, understand, and improve the accuracy and robustness of traffic light detectors in autonomous driving applications. The disentangled representation learning extracts data semantics to augment human cognition with human-friendly visual summarization, and the semantic adversarial learning efficiently exposes interpretable robustness risks and enables minimal human interaction for actionable insights. We also demonstrate the effectiveness of various performance improvement strategies derived from actionable insights with our visual analytics system, VATLD, and illustrate some practical implications for safety-critical applications in autonomous driving.

Citations: 33
StructGraphics: Flexible Visualization Design through Data-Agnostic and Reusable Graphical Structures.
IF 5.2 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2021-02-01 | Epub Date: 2021-01-28 | DOI: 10.1109/TVCG.2020.3030476 | Pages: 315-325
Theophanis Tsandilas

Information visualization research has developed powerful systems that enable users to author custom data visualizations without textual programming. These systems can support graphics-driven practices by bridging lazy data-binding mechanisms with vector-graphics editing tools. Yet, despite their expressive power, visualization authoring systems often assume that users want to generate visual representations that they already have in mind rather than explore designs. They also impose a data-to-graphics workflow, where binding data dimensions to graphical properties is a necessary step for generating visualization layouts. In this paper, we introduce StructGraphics, an approach for creating data-agnostic and fully reusable visualization designs. StructGraphics enables designers to construct visualization designs by drawing graphics on a canvas and then structuring their visual properties without relying on a concrete dataset or data schema. In StructGraphics, tabular data structures are derived directly from the structure of the graphics. Later, designers can link these structures with real datasets through a spreadsheet user interface. StructGraphics supports the design and reuse of complex data visualizations by combining graphical property sharing, by-example design specification, and persistent layout constraints. We demonstrate the power of the approach through a gallery of visualization examples and reflect on its strengths and limitations in interaction with graphic designers and data visualization experts.
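A minimal, hypothetical sketch of the underlying idea of a data-agnostic graphical structure whose free visual properties form a small table that can later be linked to a real data column (the class and method names below are invented for illustration and are not StructGraphics' API):

```python
"""Illustrative only: a 'data-agnostic' graphical structure, its derived
property table, and late binding of a data column to a visual property."""
from dataclasses import dataclass, field

@dataclass
class Bar:
    x: float
    height: float           # free visual property, initially just a drawn value
    width: float = 20.0

@dataclass
class BarGroup:
    bars: list = field(default_factory=list)

    @classmethod
    def sketch(cls, n, spacing=30.0):
        """Create the structure by 'drawing' n placeholder bars, no data involved."""
        return cls([Bar(x=i * spacing, height=50.0) for i in range(n)])

    def property_table(self):
        """Tabular structure derived directly from the graphics (one row per bar)."""
        return [{"index": i, "height": b.height} for i, b in enumerate(self.bars)]

    def bind(self, values, prop="height", max_px=100.0):
        """Link a real data column to a visual property, rescaling to pixels."""
        top = max(values)
        for bar, v in zip(self.bars, values):
            setattr(bar, prop, v / top * max_px)

if __name__ == "__main__":
    group = BarGroup.sketch(n=4)        # the design exists before any dataset
    print(group.property_table())
    group.bind([12, 48, 7, 30])         # later: attach a real data column
    print(group.property_table())
```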

Citations: 10
Chartem: Reviving Chart Images with Data Embedding.
IF 5.2 | CAS Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2021-02-01 | Epub Date: 2021-01-28 | DOI: 10.1109/TVCG.2020.3030351 | Pages: 337-346
Jiayun Fu, Bin Zhu, Weiwei Cui, Song Ge, Yun Wang, Haidong Zhang, He Huang, Yuanyuan Tang, Dongmei Zhang, Xiaojing Ma

In practice, charts are widely stored as bitmap images. Although easily consumed by humans, they are not convenient for other uses. For example, changing the chart style or type or a data value in a chart image practically requires creating a completely new chart, which is often a time-consuming and error-prone process. To assist these tasks, many approaches have been proposed to automatically extract information from chart images with computer vision and machine learning techniques. Although they have achieved promising preliminary results, there are still a lot of challenges to overcome in terms of robustness and accuracy. In this paper, we propose a novel alternative approach called Chartem to address this issue directly from the root. Specifically, we design a data-embedding schema to encode a significant amount of information into the background of a chart image without interfering human perception of the chart. The embedded information, when extracted from the image, can enable a variety of visualization applications to reuse or repurpose chart images. To evaluate the effectiveness of Chartem, we conduct a user study and performance experiments on Chartem embedding and extraction algorithms. We further present several prototype applications to demonstrate the utility of Chartem.
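As a toy illustration of the general idea of hiding a payload in the background of a chart image, the sketch below uses plain least-significant-bit steganography over near-white pixels. This is not Chartem's embedding schema, which the paper designs for robustness and imperceptibility; all thresholds and the payload format are assumptions.

```python
"""Illustrative only: hide a byte payload in the LSBs of background pixels."""
import numpy as np

def background_mask(img, thresh=250):
    """Treat near-white pixels as chart background."""
    return (img >= thresh).all(axis=-1)

def embed(img, payload: bytes):
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    out = img.copy()
    ys, xs = np.nonzero(background_mask(img))
    if bits.size > ys.size:
        raise ValueError("payload too large for available background pixels")
    # Write one bit into the LSB of the red channel of each background pixel.
    out[ys[:bits.size], xs[:bits.size], 0] &= 0xFE
    out[ys[:bits.size], xs[:bits.size], 0] |= bits
    return out

def extract(img, n_bytes, mask):
    ys, xs = np.nonzero(mask)
    bits = img[ys[:n_bytes * 8], xs[:n_bytes * 8], 0] & 1
    return np.packbits(bits).tobytes()

if __name__ == "__main__":
    chart = np.full((64, 64, 3), 255, dtype=np.uint8)   # stand-in "chart image"
    chart[20:40, 20:40] = (30, 90, 200)                 # a fake bar
    msg = b'{"series":"sales","values":[3,1,4]}'
    stego = embed(chart, msg)
    # For simplicity, extraction reuses the ORIGINAL image's background mask;
    # a real schema must locate and decode the payload from the stego image alone.
    print(extract(stego, len(msg), background_mask(chart)))
```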

Citations: 18