
Latest Articles from IEEE Transactions on Visualization and Computer Graphics

V4D: Voxel for 4D Novel View Synthesis
IF 5.2 | Region 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2022-05-28 | DOI: 10.48550/arXiv.2205.14332
Wanshui Gan, Hongbin Xu, Yi Huang, Shifeng Chen, N. Yokoya
Neural radiance fields have achieved a remarkable breakthrough in novel view synthesis for static 3D scenes. However, in the 4D setting (e.g., dynamic scenes), the performance of existing methods is still limited by the capacity of the neural network, typically a multilayer perceptron (MLP). In this paper, we use 3D voxels to model the 4D neural radiance field, V4D for short, where the 3D voxels take two formats. The first regularly models the 3D space and then feeds the sampled local 3D feature, together with the time index, to a tiny MLP that models the density and texture fields. The second is a look-up-table (LUT) format for pixel-level refinement, where the pseudo-surface produced by volume rendering serves as guidance for learning a 2D pixel-level refinement mapping. The proposed LUT-based refinement module achieves a performance gain at little computational cost and can serve as a plug-and-play module in the novel view synthesis task. Moreover, we propose a more effective conditional positional encoding for 4D data that yields a performance gain with negligible computational burden. Extensive experiments demonstrate that the proposed method achieves state-of-the-art performance at a low computational cost. The relevant code is available at https://github.com/GANWANSHUI/V4D.
Citations: 18
Strolling in Room-Scale VR: Hex-Core-MK1 Omnidirectional Treadmill
IF 5.2 | Region 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2022-04-18 | DOI: 10.48550/arXiv.2204.08437
Ziyao Wang, Chiyi Liu, Jialiang Chen, Yao Yao, Dazheng Fang, Zhiyi Shi, Rui Yan, Yiye Wang, Kanjian Zhang, Hai Wang, Haikun Wei
The natural locomotion interface is critical to the development of many VR applications. Household VR applications impose two basic requirements: a naturally immersive experience and minimal space occupation. Existing locomotion strategies generally do not satisfy both requirements well at the same time. This paper presents a novel omnidirectional treadmill (ODT) system named Hex-Core-MK1 (HCMK1). By using two kinds of mirror-symmetrical spiral rollers to generate an omnidirectional velocity field, the proposed system can provide a real walking experience with full freedom of movement in an area as small as 1.76 m², while offering substantial advantages over several existing ODT systems in weight, volume, latency, and dynamic performance. Compared with Infinadeck and HCP, the two best motor-driven ODTs to date, HCMK1's 8 cm height is only 20% of Infinadeck's and 50% of HCP's. In addition, HCMK1 is a lightweight device weighing only 110 kg, which opens possibilities for further extending VR scenarios, such as terrain simulation. The system latency of HCMK1 is only 9 ms. Experiments show that HCMK1 delivers a starting acceleration of 16.00 m/s² and a braking acceleration of 30.00 m/s².
Citations: 4
Efficient Reflectance Capture with a Deep Gated Mixture-of-Experts
IF 5.2 | Region 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2022-03-29 | DOI: 10.48550/arXiv.2203.15258
Xiaohe Ma, Ya-Qi Yu, Hongzhi Wu, Kun Zhou
We present a novel framework to efficiently acquire anisotropic reflectance in a pixel-independent fashion, using a deep gated mixture-of-experts. While existing work employs a unified network to handle all possible input, our network automatically learns to condition on the input for enhanced reconstruction. We train a gating module that takes photometric measurements as input and selects one of a number of specialized decoders for reflectance reconstruction, essentially trading generality for quality. A common pre-trained latent-transform module is also appended to each decoder to offset the burden of the increased number of decoders. In addition, the illumination conditions during acquisition can be jointly optimized. The effectiveness of our framework is validated with a lightstage on a wide variety of challenging near-planar samples. Compared with the state-of-the-art technique, our quality improves given the same number of input images, and the number of input images can be reduced to about one third for equal-quality results. We further generalize the framework to enhance a state-of-the-art technique for non-planar reflectance scanning.
Citations: 0
Revisiting the Design Patterns of Composite Visualizations
IF 5.2 | Region 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2022-03-20 | DOI: 10.48550/arXiv.2203.10476
Dazhen Deng, Weiwei Cui, Xiyu Meng, Mengye Xu, Yu Liao, Haidong Zhang, Yingcai Wu
Composite visualization is a popular design strategy that represents complex datasets by integrating multiple visualizations in a meaningful and aesthetic layout, such as juxtaposition, overlay, and nesting. With this strategy, numerous novel designs have been proposed in visualization publications to accomplish various visual analytic tasks. However, the design patterns of composite visualization remain poorly understood, leaving practitioners without a holistic design space or concrete examples for practical use. In this paper, we revisit the composite visualizations in IEEE VIS publications and answer which visualization types are composed together and how. To achieve this, we first constructed a corpus of composite visualizations from the publications and analyzed common practices, such as pattern distributions and the co-occurrence of visualization types. From the analysis, we obtained insights into the utility of different design patterns and their potential pros and cons. Furthermore, we discuss usage scenarios of our taxonomy and corpus and how future research on visualization composition can build on this study.
Citations: 6
DrawingInStyles: Portrait Image Generation and Editing with Spatially Conditioned StyleGAN
IF 5.2 | Region 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2022-03-05 | DOI: 10.48550/arXiv.2203.02762
Wanchao Su, Hui Ye, Shu-Yu Chen, Lin Gao, Hongbo Fu
The research topic of sketch-to-portrait generation has seen a surge of progress with deep learning techniques. Recently proposed StyleGAN architectures achieve state-of-the-art generation ability, but the original StyleGAN is ill-suited to sketch-based creation because of its unconditional generation nature. To address this issue, we propose a direct conditioning strategy that better preserves spatial information under the StyleGAN framework. Specifically, we introduce Spatially Conditioned StyleGAN (SC-StyleGAN for short), which explicitly injects spatial constraints into the original StyleGAN generation process. We explore two input modalities, sketches and semantic maps, which together allow users to express desired generation results more precisely and easily. Based on SC-StyleGAN, we present DrawingInStyles, a novel drawing interface that lets non-professional users easily produce high-quality, photo-realistic face images with precise control, either from scratch or by editing existing ones. Qualitative and quantitative evaluations show the superior generation ability of our method over existing and alternative solutions. The usability and expressiveness of our system are confirmed by a user study.
Citations: 9
Distance Perception in Virtual Reality: A Meta-Analysis of the Effect of Head-Mounted Display Characteristics.
IF 5.2 | Region 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2022-02-12 | DOI: 10.31234/osf.io/6fps2
Jonathan W. Kelly
Distances are commonly underperceived in virtual reality (VR), and this finding has been documented repeatedly over more than two decades of research. Yet, there is evidence that perceived distance is more accurate in modern head-mounted displays (HMDs) than in older ones. This meta-analysis of 131 studies describes egocentric distance perception across 20 HMDs and examines the relationship between perceived distance and technical HMD characteristics. Judged distance was positively associated with HMD field of view (FOV), positively associated with HMD resolution, and negatively associated with HMD weight. The effects of FOV and resolution were more pronounced among heavier HMDs. These findings suggest that future improvements in these technical characteristics may be central to resolving the problem of distance underperception in VR.
Citations: 13
How Does Automation Shape the Process of Narrative Visualization: A Survey on Tools
IF 5.2 | Region 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2022-01-01 | DOI: 10.48550/arXiv.2206.12118
Qing Chen, Shixiong Cao, Jiazhe Wang, Nan Cao
In recent years, narrative visualization has gained considerable attention. Researchers have proposed different design spaces for various narrative visualization types and scenarios to facilitate the creation process. As users' needs grow and automation technologies advance, more and more tools have been designed and developed. In this paper, we surveyed 122 papers and tools to study how automation can progressively engage in the visualization design and narrative process. By investigating the narrative strengths and drawing efforts of various visualizations, we created a two-dimensional coordinate system to map different visualization types. Our resulting taxonomy is organized by the seven types of narrative visualization on the +x-axis of the coordinate system and the four automation levels (i.e., design space, authoring tool, AI-supported tool, and AI-generator tool) we identified from the collected work. The taxonomy aims to provide an overview of current research and development on automation in narrative visualization tools. We discuss key research problems in each category and suggest new opportunities to encourage further research in the related domain.
Citations: 10
2021 VGTC Visualization Significant New Researcher Award—Michelle Borkin, Northeastern University and Benjamin Bach, University of Edinburgh
IF 5.2 | Region 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2022-01-01 | DOI: 10.1109/tvcg.2021.3114605
{"title":"2021 VGTC Visualization Significant New Researcher Award—Michelle Borkin, Northeastern University and Benjamin Bach, University of Edinburgh","authors":"","doi":"10.1109/tvcg.2021.3114605","DOIUrl":"https://doi.org/10.1109/tvcg.2021.3114605","url":null,"abstract":"","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":"1 1","pages":""},"PeriodicalIF":5.2,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62600291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Kine-Appendage: Enhancing Freehand VR Interaction Through Transformations of Virtual Appendages.
IF 5.2 | Region 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2021-12-13 | DOI: 10.36227/techrxiv.17152460.v1
Hualong Bai, Yang Tian, Shengdong Zhao, Chi-Wing Fu, Qiong Wang, P. Heng
Kinesthetic feedback, the feeling of restriction or resistance when hands contact objects, is essential for natural freehand interaction in VR. However, inducing kinesthetic feedback with mechanical hardware can be cumbersome and hard to control in commodity VR systems. We propose the kine-appendage concept to compensate for the loss of kinesthetic feedback in virtual environments: a virtual appendage is added to the user's avatar hand; when the appendage contacts a virtual object, it exhibits transformations (rotation and deformation); when it disengages from the contact, it recovers its original appearance. A proof-of-concept kine-appendage technique, BrittleStylus, was designed to enhance isomorphic typing. Our empirical evaluations demonstrated that (i) BrittleStylus significantly reduced the uncorrected error rate of naive isomorphic typing from 6.53% to 1.92% without compromising typing speed; (ii) BrittleStylus could induce a sense of kinesthetic feedback on par with that induced by pseudo-haptic (+ visual cue) methods; and (iii) participants preferred BrittleStylus over pseudo-haptic (+ visual cue) methods because of not only good performance but also fluent hand movements.
Citations: 1
Remote research on locomotion interfaces for virtual reality: Replication of a lab-based study on teleporting interfaces
IF 5.2 | Region 1, Computer Science | Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2021-12-03 | DOI: 10.31234/osf.io/wqcuf
Jonathan W. Kelly, Melynda Hoover, Taylor A. Doty, A. Renner, L. Cherep, Stephen B Gilbert
The wide availability of consumer-oriented virtual reality (VR) equipment has enabled researchers to recruit existing VR owners to participate remotely using their own equipment. Yet, there are many differences between lab and home environments, as well as between participant samples recruited for lab studies and remote studies. This paper replicates a lab-based experiment on VR locomotion interfaces using a remote sample. Participants completed a triangle-completion task (travel two path legs, then point to the path origin) using their own VR equipment in a remote, unsupervised setting. Locomotion was accomplished using two versions of the teleporting interface that varied in the availability of rotational self-motion cues. The size of the traveled path and the size of the surrounding virtual environment were also manipulated. Results from remote participants largely mirrored lab results, with overall better performance when rotational self-motion cues were available. Some differences also occurred, including a tendency for remote participants to rely less on nearby landmarks, perhaps due to greater competence in using the teleporting interface to update self-location. This replication study provides insight for VR researchers on which aspects of lab studies may or may not replicate remotely.
Citations: 1