
Latest articles in Graphics and Visual Computing

FaceShapeGene: A disentangled shape representation for flexible face image editing
Pub Date: 2021-06-01 DOI: 10.1016/j.gvc.2021.200023
Sen-Zhe Xu , Hao-Zhi Huang , Fang-Lue Zhang , Song-Hai Zhang

How would I look if I had the same nose shape as my favorite star? Existing methods for face image manipulation generally focus on modifying predefined facial attributes, editing expressions, and changing image styles; users cannot freely control the shapes of specific semantic facial parts in the generated face image. Facial part shapes are described by their geometry and need to be controlled through continuously generated geometric parameters, so existing methods that work with discretely labelled attributes are not applicable to this task. In this paper, we propose a novel approach to learn a disentangled shape representation for a face image, the FaceShapeGene, which encodes the shape information of the semantic facial parts into separate chunks in the latent space. It allows users to freely recombine the part-wise latent chunks of a face image with those of other individuals to transfer a specified facial part shape, much like gene editing. Experimental results on several tasks demonstrate that the proposed FaceShapeGene representation correctly disentangles the shape features of different semantic parts. Comparisons to existing methods show the superiority of the proposed method on facial part editing tasks.

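The abstract's "gene editing" analogy — swapping one semantic part's latent chunk between two faces — can be sketched as follows. The chunk layout (`PART_SLICES`), the 64-dimensional code, and `transfer_part` are hypothetical stand-ins for the paper's learned representation:

```python
import numpy as np

# Hypothetical chunk layout: each semantic part owns a fixed slice of the latent code.
PART_SLICES = {
    "eyes": slice(0, 16),
    "nose": slice(16, 32),
    "mouth": slice(32, 48),
    "face_contour": slice(48, 64),
}

def transfer_part(target_code, source_code, part):
    """Copy one part's latent chunk from source into target, leaving the rest intact."""
    result = target_code.copy()
    result[PART_SLICES[part]] = source_code[PART_SLICES[part]]
    return result

# Example: give the target face the source face's nose chunk.
target = np.zeros(64)
source = np.ones(64)
edited = transfer_part(target, source, "nose")
```

Because the chunks are disjoint slices, editing one part leaves every other part's code — and hence, in the paper's setting, every other part's shape — untouched.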
Graphics and Visual Computing, Volume 4, Article 200023, June 2021.
Citations: 0
Real-time physically plausible simulation of forest
Pub Date: 2021-06-01 DOI: 10.1016/j.gvc.2021.200025
Zhengze Li, Fen Kuang, Yanci Zhang

In this paper, we propose a lookup-table method to simulate a physically plausible large-scale forest animation. Our method is rooted in an FEM simulator that computes plant deformation, and establishes a table from pre-computed dynamic-equation data. Based on this table, we propose an efficient physical-state similarity algorithm for traversing it: the vertex displacements of the model are retrieved from the table in real time, and the model's current pose is continuously updated. The forest reacts to the environment at an interactive rate by calculating the effects of random wind. Experiments show that our method can simulate a single tree with high efficiency, and can simulate a physically plausible forest of 500 trees at a frame rate of 34 fps.

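The table lookup the abstract describes — matching the current physical state against pre-computed FEM results — might look like this in miniature. The two-dimensional state vector, the tiny table, and the L2 similarity measure are illustrative assumptions, not the paper's actual data:

```python
import numpy as np

# Hypothetical precomputed table: each row pairs a physical state (e.g. wind force,
# current deflection) with the vertex displacement the FEM solver produced for it.
states = np.array([[0.0, 0.0], [0.5, 0.1], [1.0, 0.3]])   # precomputed states
displacements = np.array([[0.0], [0.02], [0.07]])         # per-state vertex offsets

def lookup_displacement(query_state):
    """Return the displacement of the most similar precomputed state (L2 distance)."""
    idx = np.argmin(np.linalg.norm(states - query_state, axis=1))
    return displacements[idx]
```

At runtime only this nearest-neighbor search runs per tree, which is what makes 500 trees feasible where per-frame FEM solves would not be.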
Graphics and Visual Computing, Volume 4, Article 200025, June 2021.
Citations: 0
Virtual reality game level layout design for real environment constraints
Pub Date: 2021-06-01 DOI: 10.1016/j.gvc.2021.200020
Huimin Liu, Zhiquan Wang, Angshuman Mazumdar, Christos Mousas

This paper presents an optimization-based approach for designing virtual reality game level layouts, based on the layout of a real environment. Our method starts by asking the user to define the shape of the real environment and the obstacles (e.g., furniture) located in it. Then, by representing a game level as an assembly of chunks and defining the game level layout design decisions in cost terms (mapping, fitting, variations, and accessibility) in a total cost function, our system automatically synthesizes a game level layout that fulfills the real environment layout and its constraints as well as the user-defined design decisions. To evaluate the proposed method, a user study was conducted. The results indicated that the proposed method: (1) enhanced the levels of presence; (2) enhanced the levels of involvement of participants in the virtual environment; and (3) reduced the fear of collision with the real environment and its constraints. Limitations and future research directions are also discussed.

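The total cost function combining mapping, fitting, variation, and accessibility terms can be sketched as below. The weights, the dictionary layout, and the greedy `best_layout` selector are simplifications assumed for illustration, not the paper's actual optimizer:

```python
# Hypothetical per-term costs; the paper combines mapping, fitting, variation, and
# accessibility terms into one total cost that the optimizer minimizes.
def total_cost(layout, weights=(1.0, 1.0, 1.0, 1.0)):
    w_map, w_fit, w_var, w_acc = weights
    return (w_map * layout["mapping"] + w_fit * layout["fitting"]
            + w_var * layout["variation"] + w_acc * layout["accessibility"])

def best_layout(candidates):
    """Greedy stand-in for the paper's synthesis step: pick the lowest-cost candidate."""
    return min(candidates, key=total_cost)

candidates = [
    {"mapping": 1.0, "fitting": 1.0, "variation": 1.0, "accessibility": 1.0},
    {"mapping": 0.1, "fitting": 0.2, "variation": 0.3, "accessibility": 0.1},
]
chosen = best_layout(candidates)
```

Expressing the design decisions as additive cost terms is what lets one objective trade off fidelity to the real room against gameplay variety.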
Graphics and Visual Computing, Volume 4, Article 200020, June 2021.
Citations: 15
ERDSE: efficient reinforcement learning based design space exploration method for CNN accelerator on resource limited platform
Pub Date: 2021-06-01 DOI: 10.1016/j.gvc.2021.200024
Kaijie Feng, Xiaoya Fan, Jianfeng An, Xiping Wang, Kaiyue Di, Jiangfei Li, Minghao Lu, Chuxi Li

Convolutional Neural Network (CNN) accelerator design on resource-limited platforms faces the challenge of lacking an efficient design space exploration (DSE) method because of its huge and irregular design space. Numerous parameters belonging to the accelerator architecture and dataflow mode jointly construct a huge design space, while power and resource constraints make that space quite irregular. Under such circumstances, traditional DSE methods based on exhaustive search are infeasible for the non-trivial design space, and methods based on general optimization algorithms are also inefficient because of the irregular distribution of design points. In this paper, we present an efficient DSE method named ERDSE for CNN accelerator design on resource-limited platforms. ERDSE is based on the reinforcement learning algorithm REINFORCE but refines it to suit the complex design space: it implements an off-policy strategy to decouple the sampling and learning phases, then refines each separately to further improve exploration ability and sample utilization. We apply ERDSE to optimize the computing latency of CNN accelerators for VGG-16 and MobileNet-V3. Under the tightest constraints, ERDSE achieves 1.2x-1.7x (on VGG-16) and 2.3x-4.9x (on MobileNet-V3) latency improvements compared with other DSE methods, demonstrating its efficiency.

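A minimal REINFORCE loop over a discrete design space gives the flavor of the approach (the paper's off-policy refinements are omitted). The four-option latency table, learning rate, and iteration count are invented for this toy example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D design space: 4 candidate design points; reward = -latency,
# so option 2 is (by construction here) the fastest design.
latency = np.array([9.0, 6.0, 2.0, 7.0])
logits = np.zeros(4)                      # policy parameters

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for _ in range(500):                      # REINFORCE: sample, score, ascend
    p = softmax(logits)
    a = rng.choice(4, p=p)
    reward = -latency[a]
    baseline = -(latency * p).sum()       # expected reward as variance-reducing baseline
    grad = -p
    grad[a] += 1.0                        # grad of log pi(a) w.r.t. logits
    logits += 0.1 * (reward - baseline) * grad

best = int(np.argmax(softmax(logits)))
```

The baseline subtraction keeps the gradient signal meaningful even when all rewards are negative, which matters when the reachable (constraint-satisfying) region of the space is small.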
Graphics and Visual Computing, Volume 4, Article 200024, June 2021.
Citations: 2
Computers and Graphics–Foreword to the Special Section on CAD/Graphics 2021
Pub Date: 2021-06-01 DOI: 10.1016/j.gvc.2021.200027
Graphics and Visual Computing, Volume 4, Article 200027, June 2021.
Citations: 0
Experiencing GPU path tracing in online courses
Pub Date: 2021-06-01 DOI: 10.1016/j.gvc.2021.200022
Masaru Ohkawara , Hideo Saito , Issei Fujishiro

In consideration of the interdependency between image sensing/recognition (computer vision, CV) and 3D image synthesis (computer graphics, CG) in visual computing, the Department of Information and Computer Science at Keio University reorganized its first undergraduate course on CV and CG into a series of three courses on visual computing in the 2019 academic year. One salient feature of these courses is a newly introduced programming assignment with two specific goals: to experience GPU computing and to understand the path tracing algorithm. The purpose is to help students easily understand trends in visual computing and vividly envision the future of the CG field. Specifically, two types of tasks were given to students: material design and analysis of the relationship between sampling and noise. The educational material builds on Google Colaboratory, a cloud-based development environment, to be independent of the students' hardware. Owing to this judicious design, students can work on the programming assignment with relatively inexpensive hardware, such as laptop PCs, tablets, or even smartphones, wherever they have a standard off-campus network environment. In the second academic year (2020) of these courses, this type of exercise was especially valuable for students who had to take the courses online because of the COVID-19 pandemic. The effects of the new courses and their educational materials were empirically confirmed from quantitative and qualitative perspectives.

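The second student task — analyzing the relationship between sampling and noise — boils down to the Monte Carlo fact that standard error shrinks as 1/sqrt(N). A toy CPU-side illustration (not the course's GPU material; the sample counts and trial count are invented):

```python
import random

random.seed(42)

# Estimate the mean of a uniform "radiance" signal with N samples; the spread of
# the estimates across repeated trials is the visible noise in a path-traced image.
def estimate(n_samples):
    return sum(random.random() for _ in range(n_samples)) / n_samples

def spread(n_samples, trials=200):
    """Standard deviation of the estimator over many independent trials."""
    ests = [estimate(n_samples) for _ in range(trials)]
    mean = sum(ests) / trials
    return (sum((e - mean) ** 2 for e in ests) / trials) ** 0.5

noisy = spread(4)      # few samples per pixel: high variance
clean = spread(256)    # 64x more samples: roughly 8x less noise
```

Plotting `spread` against the sample count reproduces the characteristic diminishing-returns curve students observe when raising samples per pixel.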
Graphics and Visual Computing, Volume 4, Article 200022, June 2021.
Citations: 3
LatentMap: Effective auto-encoding of density maps for spatiotemporal data visualizations
Pub Date: 2021-06-01 DOI: 10.1016/j.gvc.2021.200019
Shiqi Jiang , Chenhui Li , Lei Wang , Yanpeng Hu , Changbo Wang

In the study of spatiotemporal data visualization, compressing and morphing density maps are challenging tasks. Many existing methods require tuning multiple parameters and rich experience, yet still cannot produce accurate or smooth results. In this paper, we propose a GAN-based, end-to-end method (LatentMap) to explore the latent space of density maps. First, we find that small latent codes can serve as compression results, which greatly reduces transmission time in a front-end system. We collect density maps into a dataset; our model learns from the dataset and samples from a Gaussian distribution to encode and decode density maps. Second, based on the latent codes, we explore smooth dynamic visualization of density maps, and our method generates dynamic and smooth results. We show the results of our method in a variety of situations and evaluate it from multiple aspects. The results demonstrate the effectiveness and practicality of our approach. Our method has practical applications, such as speeding up front-end loading, completing or predicting stream-data information, and visual querying.

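The smooth dynamic visualization the abstract mentions amounts to interpolating small latent codes and decoding each in-between frame. A sketch with a hypothetical linear decoder standing in for the trained GAN generator (dimensions and basis are invented):

```python
import numpy as np

# Sketch of latent-space morphing (not the paper's GAN): `decode` is a stand-in
# for the trained generator; blending small latent codes yields in-between frames.
def decode(z, basis):
    """Hypothetical decoder: linear map from latent code to a flat density map."""
    return basis @ z

def morph(z_a, z_b, t, basis):
    """Blend two latent codes at parameter t in [0, 1] and decode the frame."""
    return decode((1 - t) * z_a + t * z_b, basis)

rng = np.random.default_rng(1)
basis = rng.standard_normal((64, 8))       # 8-D latent code -> 64-pixel density map
z_a, z_b = rng.standard_normal(8), rng.standard_normal(8)
midframe = morph(z_a, z_b, 0.5, basis)
```

The compression claim follows from the shapes alone: transmitting the 8-value code instead of the 64-pixel map is an 8x saving, and the decoder runs client-side.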
Graphics and Visual Computing, Volume 4, Article 200019, June 2021.
Citations: 2
CrossVis: A visual analytics system for exploring heterogeneous multivariate data with applications to materials and climate sciences
Pub Date: 2020-06-01 DOI: 10.1016/j.gvc.2020.200013
Chad A. Steed , John R. Goodall , Junghoon Chae , Artem Trofimov

We present a new visual analytics system, called CrossVis, that allows flexible exploration of multivariate data with heterogeneous data types. After presenting the design requirements, which were derived from prior collaborations with domain experts, we introduce key features of CrossVis beginning with a tabular data model that coordinates multiple linked views and performance enhancements that enable scalable exploration of complex data. Next, we introduce extensions to the parallel coordinates plot, which include new axis representations for numerical, temporal, categorical, and image data, an embedded bivariate axis option, dynamic selections, focus+context axis scaling, and graphical indicators of key statistical values. We demonstrate the practical effectiveness of CrossVis through two scientific use cases: one focused on understanding neural network image classifications from a genetic engineering project, and another involving general exploration of a large and complex data set of historical hurricane observations. We conclude with discussions regarding domain expert feedback, future enhancements to address limitations, and the interdisciplinary process used to design CrossVis.

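One building block of any parallel-coordinates view of heterogeneous data is normalizing each column — numeric or categorical — onto a shared [0, 1] axis so records become polylines. A minimal sketch (not CrossVis code; the hurricane-flavored sample rows are invented):

```python
# Map each heterogeneous column onto a shared [0, 1] vertical axis.
def numeric_axis(values):
    """Min-max normalize a numeric column."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    return [(v - lo) / span for v in values]

def categorical_axis(values):
    """Assign categories evenly spaced positions along the axis."""
    cats = sorted(set(values))
    step = 1.0 / max(len(cats) - 1, 1)
    pos = {c: i * step for i, c in enumerate(cats)}
    return [pos[v] for v in values]

rows = {"wind": [40, 80, 120], "basin": ["ATL", "EPAC", "ATL"]}
# One (wind, basin) coordinate pair per record: the polyline vertices to draw.
polylines = list(zip(numeric_axis(rows["wind"]), categorical_axis(rows["basin"])))
```

CrossVis's per-type axis representations go well beyond this, but the shared normalized axis is what lets one polyline cross numeric, temporal, and categorical columns alike.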
Graphics and Visual Computing, Volume 3, Article 200013, June 2020.
Citations: 11
Real-time non-photorealistic animation for immersive storytelling in “Age of Sail”
Pub Date: 2020-06-01 DOI: 10.1016/j.cagx.2019.100012
Cassidy Curtis , Kevin Dart , Theresa Latzko , John Kahrs

Immersive media such as virtual and augmented reality pose some interesting new challenges for non-photorealistic animation: we must not only balance the screen-space rules of a 2D visual style against 3D motion coherence, but also account for stereo spatialization and interactive camera movement, at a rate of 90 frames per second. We introduce two new real-time rendering techniques: MetaTexture, an example-based texturing method that adheres to the movement of 3D geometry while preserving the texture’s screen-space characteristics, and Edge Breakup, a method for roughening edges by warping with structured noise. We also describe a custom rendering pipeline featuring art-directable coloring, shadow filtering, and texture indication, and our approach to animating and rendering a painterly ocean in real time. We show how we have used these techniques to achieve the “moving illustration” style of the real-time immersive short film “Age of Sail”.

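The Edge Breakup idea — roughening a clean edge by warping it with structured noise — can be illustrated with a simple sine-based offset along the edge. The amplitude, frequency, and single sine band are assumptions for this toy; the film uses structured noise in a real-time shader:

```python
import math

# Toy edge-breakup warp (assumed form, not the film's shader): offset sample
# points along an edge with band-limited sine "noise" to roughen the line.
def breakup(points, amplitude=0.05, frequency=7.0):
    out = []
    for i, (x, y) in enumerate(points):
        t = i / max(len(points) - 1, 1)            # parameter along the edge
        offset = amplitude * math.sin(2 * math.pi * frequency * t)
        out.append((x, y + offset))
    return out

edge = [(i / 10.0, 0.0) for i in range(11)]        # a clean horizontal edge
rough = breakup(edge)
```

Keeping the warp a function of the edge parameter (rather than screen position) is what preserves coherence as the camera moves — the wobble sticks to the line instead of swimming.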
Graphics and Visual Computing, Volume 3, Article 100012, June 2020.
Citations: 0