
Latest publications from Proceedings of the ACM on computer graphics and interactive techniques

Importance-Based Ray Strategies for Dynamic Diffuse Global Illumination
Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2023-05-12 DOI: 10.1145/3585500
Zihao Liu, Jing Huang, Allan Rocha, Jim Malmros, Jerry Zhang
In this paper, we propose the first efficient ray allocation technique for Dynamic Diffuse Global Illumination (DDGI) using Multiple Importance Sampling (MIS). Our technique, IS-DDGI, extends DDGI by incorporating a set of importance-based ray strategies that analyze, allocate, and manage ray resources on the GPU. We combine these strategies with an adaptive historical and temporal frame-to-frame analysis for effective reuse of information, and with a set of GPU-based optimizations that speed up ray allocation and reduce memory bandwidth. IS-DDGI achieves visual quality similar to DDGI with a speedup of 1.27x to 2.47x in total DDGI time and 3.29x to 6.64x in probe ray tracing time over the previous technique [Majercik et al. 2021]. Most of IS-DDGI's speedup comes from faster probe ray tracing.
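The listing gives no code; as a rough sketch of the multiple importance sampling machinery that IS-DDGI builds on (the balance heuristic combining two sampling strategies — all function names here are illustrative, not from the paper):

```python
import random

def balance_heuristic(pdf_this, pdf_other):
    """MIS balance heuristic: weight for a sample drawn from one strategy."""
    return pdf_this / (pdf_this + pdf_other)

def mis_estimate(f, n=20000, seed=0):
    """Estimate the integral of f over [0, 1] by combining two strategies:
    uniform sampling (pdf = 1) and linear sampling (pdf = 2x)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        # Strategy A: uniform on [0, 1], pdf = 1.
        xa = rng.random()
        total += balance_heuristic(1.0, 2.0 * xa) * f(xa) / 1.0
        # Strategy B: pdf = 2x, sampled via inverse CDF x = sqrt(u), u in (0, 1].
        xb = (1.0 - rng.random()) ** 0.5
        total += balance_heuristic(2.0 * xb, 1.0) * f(xb) / (2.0 * xb)
    return total / n
```

Each sample is weighted so the two strategies' contributions combine without double counting; for example, estimating the integral of x^2 on [0, 1] this way converges to 1/3.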
Citations: 0
Fast Position-based Multi-Agent Group Dynamics
Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2023-05-12 DOI: 10.1145/3585507
Tomer Weiss
We present a novel method for simulating groups moving in formation. Recent approaches for simulating group motion operate via forces or velocity connections. While such approaches are effective in several cases, they do not scale easily to large crowds or irregular formation shapes, and they provide limited fine-grained control over agent and group behaviors. In this paper we propose a novel approach that addresses these difficulties via positional constraints, using a position-based dynamics solver. Our approach allows real-time, interactive simulation of a variety of group sizes, formation shapes, and scenarios with up to thousands of agents.
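As a hedged sketch of the position-based dynamics building block such a solver rests on (this is the standard PBD distance-constraint projection, not the authors' formation solver):

```python
def project_distance(p1, p2, rest, stiffness=1.0):
    """Standard PBD projection: nudge two equal-mass 2D agents so the
    distance between them moves toward the rest length."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist == 0.0:
        return p1, p2  # coincident points: no defined correction direction
    corr = stiffness * 0.5 * (dist - rest) / dist
    return ((p1[0] + corr * dx, p1[1] + corr * dy),
            (p2[0] - corr * dx, p2[1] - corr * dy))
```

Iterating this projection over all neighboring agent pairs each frame is the essence of a position-based formation solver: the constraints directly edit positions, which is what makes fine-grained per-agent control cheap.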
Citations: 1
Efficient Spatial Resampling Using the PDF Similarity
Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2023-05-12 DOI: 10.1145/3585501
Yusuke Tokuyoshi
In real-time rendering, spatiotemporal reservoir resampling (ReSTIR) is a powerful technique to increase the number of candidate samples for resampled importance sampling. However, reusing spatiotemporal samples is not always efficient when target PDFs for the reused samples are dissimilar to the integrand. Target PDFs are often spatially different for highly detailed scenes due to geometry edges, normal maps, spatially varying materials, and shadow edges. This paper introduces a new method of rejecting spatial reuse based on the similarity of PDF shapes for single-bounce path connections (e.g., direct illumination). While existing rejection methods for ReSTIR do not support arbitrary materials and shadow edges, our PDF similarity takes them into account because target PDFs include BSDFs and shadows. In this paper, we present a rough estimation of PDF shapes using von Mises--Fisher distributions and temporal resampling. We also present a stable combination of our rejection method and the existing rejection method, considering estimation errors due to temporal disocclusions and moving light sources. This combination efficiently reduces the error around shadow edges with temporal continuities. By using our method for a ReSTIR variant that reuses shadow ray visibility for the integrand, we can reduce the number of shadow rays while preserving shadow edges.
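The paper's exact similarity metric is not reproduced here; as a sketch of the underlying ingredient, the von Mises--Fisher density on the sphere (which the abstract says is used to roughly represent PDF shapes) can be evaluated as:

```python
import math

def vmf_pdf(x, mu, kappa):
    """von Mises--Fisher density on the unit sphere in 3D.
    x and mu are unit 3-vectors; kappa > 0 is the concentration."""
    dot = sum(a * b for a, b in zip(x, mu))
    return kappa / (4.0 * math.pi * math.sinh(kappa)) * math.exp(kappa * dot)
```

As kappa approaches 0 this tends to the uniform density 1/(4*pi); large kappa concentrates the lobe around mu. Two such lobes can then be compared, e.g., via the angle between their mean directions and their concentrations, which is the flavor of "PDF shape similarity" the abstract describes (the exact rejection formula is the paper's and is not shown here).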
Citations: 1
Subspace Culling for Ray-Box Intersection
Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2023-05-12 DOI: 10.1145/3585503
A. Yoshimura, T. Harada
Ray tracing is an essential operation for realistic image synthesis. The acceleration of ray tracing has been studied for a long time because algorithms such as light transport simulations require a large amount of ray tracing. One of the major approaches to accelerating intersections is to use bounding volumes for early pruning of the primitives they contain. The axis-aligned bounding box is a popular bounding volume for ray tracing because of its simplicity and efficiency. However, a conservative bounding volume may enclose extra empty space in addition to its content. In particular, primitives that are thin and diagonal to the axes produce false-positive hits on the box volume because of this extra space. Although more complex bounding volumes such as oriented bounding boxes may reduce false-positive hits further, they are computationally expensive. In this paper, we propose a novel culling approach that reduces false-positive hits for the bounding box by embedding a binary voxel data structure into the volume. Because a ray is also represented as a conservative voxel volume in our approach, the ray--voxel intersection is computed cheaply with bitwise AND operations. Our method is applicable to hierarchical data structures such as the bounding volume hierarchy (BVH). It reduces false-positive hits from the ray--box test and reduces the number of intersections during BVH traversal in ray tracing. We evaluate the reduction of intersections on several scenes and show the possibility of performance improvement despite the culling overhead. We also introduce a compression approach with a lookup table for our voxel data. We show that our compressed voxel data achieves significant false-positive reductions with a small amount of memory.
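A minimal sketch of the bitwise test the abstract describes, assuming a 4x4x4 voxel grid per bounding box packed into a 64-bit integer (the grid resolution and packing order here are illustrative, not the paper's):

```python
RES = 4  # a 4x4x4 voxel grid fits in a 64-bit mask

def bit_index(x, y, z):
    """Linear bit position of voxel (x, y, z) in the packed mask."""
    return x + RES * (y + RES * z)

def make_mask(cells):
    """Pack a set of occupied voxel coordinates into one integer bitfield."""
    mask = 0
    for x, y, z in cells:
        mask |= 1 << bit_index(x, y, z)
    return mask

def ray_hits_contents(box_mask, ray_mask):
    """Subspace culling idea: the ray's conservative voxel volume overlaps
    the box contents iff any bit is set in both masks -- one AND, one compare."""
    return (box_mask & ray_mask) != 0
```

If the AND is zero, the ray passed through the box's empty corner space and the primitives inside can be skipped without any primitive-level intersection tests.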
Citations: 0
Real-Time Sand Dune Simulation
Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2023-05-12 DOI: 10.1145/3585510
B. Taylor, J. Keyser
We present a novel real-time method for simulating aeolian sand transport and dune propagation. Our method is a GPU-based extension of the Desertscapes Simulation sand propagation model that additionally captures echo dunes and obstacle interaction. We validate our method by comparing it against an existing study of echo dune evolution in a wind tunnel environment. Additionally, we demonstrate the significantly improved performance of our method by comparison to the existing, CPU-based method. Lastly, we validate our method by comparing it to a published study of dune evolution in a bidirectional wind environment driven by an offline, cellular automaton-based method. We conclude that the presented method is a simple and helpful tool for users in multiple domains who wish to capture physically plausible desertscape evolution in real time.
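A toy 1D version of the cellular sand-transport rules this family of models builds on — saltation (hop a slab downwind) plus avalanching (relax over-steep slopes). The paper's GPU method and parameters are not reproduced here; hop length and repose threshold below are invented for illustration:

```python
import random

def saltate(height, rng, hop=2):
    """Pick a random column, lift one sand slab, deposit it downwind."""
    i = rng.randrange(len(height))
    if height[i] > 0:
        height[i] -= 1
        height[(i + hop) % len(height)] += 1

def avalanche(height, max_slope=2):
    """Relax slopes steeper than the repose threshold by moving slabs downhill."""
    n, moved = len(height), True
    while moved:
        moved = False
        for i in range(n):
            j = (i + 1) % n
            if height[i] - height[j] > max_slope:
                height[i] -= 1; height[j] += 1; moved = True
            elif height[j] - height[i] > max_slope:
                height[j] -= 1; height[i] += 1; moved = True
```

Repeating these two steps over a 2D grid, with wind-dependent hop lengths and shadowing behind obstacles, is the classic cellular dune model; mass is conserved by construction since every rule only moves slabs.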
Citations: 0
G2 Blending Ball B-Spline Curve by B-Spline
Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2023-05-12 DOI: 10.1145/3585504
Yuming Zhao, Zhongke Wu, Xingce Wang, Xinyu Liu
Blending two Ball B-Spline Curves (BBSCs) is an important tool in modeling tubular objects. In this paper, we propose a new BBSC blending method. Our method makes three main contributions. First, we use a BBSC instead of a ball Bézier curve to model the blending part, which expands the solution space and gives the resulting BBSC better fairness. Second, we consider both the skeleton line and the radius of the BBSC, which keeps the skeleton line and radius consistent. Third, we propose a two-step optimization process to handle the larger number of parameters introduced by the expanded solution space, so that our method runs in real time.
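For orientation, a ball B-spline curve evaluates a center and a radius with the same basis functions; below is a sketch using the uniform cubic B-spline basis (the paper's G2 blending construction itself is not reproduced):

```python
def cubic_bspline(p0, p1, p2, p3, t):
    """Point on one uniform cubic B-spline segment, t in [0, 1].
    Control points are (x, y, r) triples: blending the radius r with the
    same basis as the center coordinates is exactly the ball B-spline idea."""
    b0 = (1 - t) ** 3 / 6.0
    b1 = (3 * t**3 - 6 * t**2 + 4) / 6.0
    b2 = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6.0
    b3 = t**3 / 6.0
    return tuple(b0 * a + b1 * b + b2 * c + b3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))
```

The four basis weights sum to one for every t (partition of unity), so both the skeleton point and the radius are convex combinations of the control data — which is why treating them together, as the paper advocates, keeps them consistent.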
Citations: 0
ManiLoco: A VR-Based Locomotion Method for Concurrent Object Manipulation
Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2023-05-01 DOI: 10.1145/3585502
Dayu Wan, Xiaolei Guo, Jiahui Dong, Christos Mousas, Yingjie Chen

The use of virtual reality (VR) in laboratory skill training is rapidly increasing. In such applications, users often need to explore a large virtual environment within a limited physical space while completing a series of hand-based tasks (e.g., object manipulation). However, the most widely used controller-based teleport methods may conflict with the users' hand operation and result in a higher cognitive load, negatively affecting their training experiences. To alleviate these limitations, we designed and implemented a locomotion method called ManiLoco that enables hands-free interaction and thus avoids conflicts and interruptions from other tasks. Users can teleport to a remote object's position by taking a step toward the object while looking at it. We evaluated ManiLoco and compared it with the state-of-the-art Point & Teleport method in a within-subject experiment with 16 participants. The results confirmed the viability of our foot- and head-based approach and its better support for concurrent object manipulation in VR training tasks. Furthermore, our locomotion method does not require any additional hardware. It relies solely on the VR head-mounted display (HMD) and our implementation of step detection, and it can easily be applied to any VR application as a plugin.
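A loose sketch of the kind of trigger logic the abstract describes — teleport when a step is detected while the gaze rests on a target. The window and threshold below are invented for illustration; the actual step-detection implementation is the paper's:

```python
def should_teleport(head_heights, gaze_target, step_threshold=0.03):
    """Toy ManiLoco-style trigger: fire a teleport when the HMD's vertical
    oscillation over a short window exceeds a threshold (read as a step)
    while the user's gaze rests on a teleportable object."""
    if gaze_target is None:
        return False  # nothing under the gaze ray: never teleport
    return max(head_heights) - min(head_heights) > step_threshold
```

Because both signals come from the HMD itself (head pose for the step, head orientation for the gaze ray), no controller input or extra tracker is involved, matching the abstract's "no additional hardware" claim.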

Citations: 0
Pixel-wise Guidance for Utilizing Auxiliary Features in Monte Carlo Denoising
Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2023-04-11 DOI: 10.1145/3585505
Kyu Beom Han, Olivia G. Odenthal, Woo Jae Kim, S.-E. Yoon
Auxiliary features such as geometric buffers (G-buffers) and path descriptors (P-buffers) have been shown to significantly improve Monte Carlo (MC) denoising. However, recent approaches implicitly learn to exploit auxiliary features for denoising, which can lead to insufficient utilization of each type of auxiliary feature. To overcome this issue, we propose a denoising framework that relies on explicit pixel-wise guidance for utilizing auxiliary features. First, we train two denoisers, each with a different auxiliary feature (i.e., G-buffers or P-buffers). Then we design an ensembling network that produces per-pixel ensembling weight maps, which indicate, for each pixel, which auxiliary feature should dominate its reconstruction, and use these maps to combine the two denoised results of our denoisers. We also propagate the pixel-wise guidance to the denoisers by jointly training the denoisers and the ensembling network, further guiding the denoisers to focus on regions where G-buffers or P-buffers are relatively important for denoising. Our results show considerable improvement in denoising performance compared to the baseline denoising model using both G-buffers and P-buffers. The source code is available at https://github.com/qbhan/GuidanceMCDenoising.
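The final combination step is a per-pixel convex blend of the two denoised outputs; a minimal sketch (nested-list image layout and names are illustrative, not the paper's tensors):

```python
def ensemble(img_g, img_p, weights):
    """Blend two denoised images with per-pixel weights w in [0, 1]:
    out[i][j] = w * (G-buffer denoiser output) + (1 - w) * (P-buffer output)."""
    return [[w * g + (1.0 - w) * p
             for g, p, w in zip(row_g, row_p, row_w)]
            for row_g, row_p, row_w in zip(img_g, img_p, weights)]
```

The ensembling network's job is to predict the `weights` map, pushing w toward 1 where geometry explains the signal and toward 0 where path information does.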
Citations: 0
DiffXPBD
Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date : 2023-01-04 DOI: 10.1145/3606923
Tuur Stuyck, Hsiao-yu Chen
We present DiffXPBD, a novel and efficient analytical formulation for the differentiable position-based simulation of compliant constrained dynamics (XPBD). Our proposed method allows simultaneous computation of gradients of numerous parameters with respect to a goal function while leveraging a performant simulation model. The method is efficient, enabling differentiable simulations with high-resolution geometry and many degrees of freedom (DoFs). Collisions are naturally included in the framework. Our differentiable model allows a user to easily add optimization variables. Each control-variable gradient requires the computation of only a few partial derivatives, which can be computed using automatic differentiation code. We demonstrate the efficacy of the method with examples such as elastic cloth and volumetric material parameter estimation, initial value optimization, optimizing for the underlying body shape and pose by observing only the clothing, and optimizing a time-varying external force sequence to match sparse keyframe shapes at specific times. Our approach demonstrates excellent efficiency, which we show on high-resolution meshes with optimizations involving over 26 million degrees of freedom. Making an existing solver differentiable requires only a few modifications, and the model is compatible with both modern CPU and GPU multi-core hardware.
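To make the idea concrete, here is a one-particle XPBD distance constraint together with a finite-difference gradient of its result with respect to the rest length. The paper derives such gradients analytically; the finite-difference check below is only an illustration of what "gradients of parameters with respect to a goal" means for an XPBD step:

```python
def xpbd_solve(x, rest, compliance, dt, iters=20):
    """XPBD loop for one unit-mass particle in 1D constrained to sit at
    distance `rest` from the origin: C(x) = x - rest, so grad C = 1."""
    alpha = compliance / (dt * dt)  # time-step-scaled compliance
    lam = 0.0
    for _ in range(iters):
        c = x - rest
        dlam = (-c - alpha * lam) / (1.0 + alpha)
        lam += dlam
        x += dlam  # move the particle along grad C
    return x

def dgoal_drest(x0, rest, compliance, dt, eps=1e-5):
    """Central finite difference of the final position w.r.t. rest length."""
    hi = xpbd_solve(x0, rest + eps, compliance, dt)
    lo = xpbd_solve(x0, rest - eps, compliance, dt)
    return (hi - lo) / (2.0 * eps)
```

With zero compliance the constraint is hard, the particle lands exactly at `rest`, and the gradient of the final position with respect to `rest` is 1 — the analytical formulation in the paper recovers such derivatives without the 2x-per-parameter cost of finite differencing.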
Citations: 0
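The DiffXPBD abstract above builds on XPBD's compliant-constraint projection, which the authors differentiate through. As a point of reference, here is a minimal NumPy sketch of one XPBD projection step for a single distance constraint between two particles; the function name and setup are illustrative, not taken from the paper:

```python
import numpy as np

def xpbd_distance_step(x, inv_mass, rest_len, compliance, dt, lambda_acc):
    """One XPBD projection of a distance constraint between two particles.

    x: (2, 3) positions; inv_mass: (2,) inverse masses.
    Returns updated positions and the accumulated Lagrange multiplier.
    """
    d = x[1] - x[0]
    dist = np.linalg.norm(d)
    c = dist - rest_len                  # constraint value C(x)
    n = d / dist                         # constraint gradient direction
    alpha_tilde = compliance / dt ** 2   # time-step-scaled compliance
    w = inv_mass[0] + inv_mass[1]
    # XPBD multiplier update (Macklin et al. 2016 formulation)
    dlam = (-c - alpha_tilde * lambda_acc) / (w + alpha_tilde)
    x[0] -= inv_mass[0] * dlam * n
    x[1] += inv_mass[1] * dlam * n
    return x, lambda_acc + dlam
```

With zero compliance this reduces to plain PBD: a stretched constraint is satisfied exactly in one projection, which is the rigid limit the paper's compliant formulation generalizes.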
NeRFshop: Interactive Editing of Neural Radiance Fields
Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2023-01-01 DOI: 10.1145/3585499
Clément Jambon, B. Kerbl, Georgios Kopanas, Stavros Diolatzis, Thomas Leimkühler, G. Drettakis
Neural Radiance Fields (NeRFs) have revolutionized novel view synthesis for captured scenes, with recent methods allowing interactive free-viewpoint navigation and fast training for scene reconstruction. However, the implicit representations used by these methods—often including neural networks and complex encodings— make them difficult to edit. Some initial methods have been proposed, but they suffer from limited editing capabilities and/or from a lack of interactivity, and are thus unsuitable for interactive editing of captured scenes. We tackle both limitations and introduce NeRFshop, a novel end-to-end method that allows users to interactively select and deform objects through cage-based transformations. NeRFshop provides fine scribble-based user control for the selection of regions or objects to edit, semi-automatic cage creation, and interactive volumetric manipulation of scene content thanks to our GPU-friendly two-level interpolation scheme. Further, we introduce a preliminary approach that reduces potential resulting artifacts of these transformations with a volumetric membrane interpolation technique inspired by Poisson image editing and provide a process that “distills” the edits into a standalone NeRF representation.
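The cage-based transformations mentioned in the abstract interpolate a deformation of a coarse control cage onto the points it encloses. The paper's cages and its GPU two-level interpolation scheme are more general; the following is only a minimal sketch of the underlying idea, assuming an axis-aligned box cage with trilinear weights (all names here are illustrative):

```python
import numpy as np

def trilinear_cage_deform(p, cage_min, cage_max, deformed_corners):
    """Map a point inside an axis-aligned box cage to its position under a
    deformation of the cage's 8 corners, using trilinear weights.

    deformed_corners[i] is the moved position of the corner whose local
    coordinates are (i & 1, (i >> 1) & 1, (i >> 2) & 1).
    """
    t = (p - cage_min) / (cage_max - cage_min)   # local coords in [0, 1]^3
    out = np.zeros(3)
    for i in range(8):
        bx, by, bz = i & 1, (i >> 1) & 1, (i >> 2) & 1
        w = ((t[0] if bx else 1.0 - t[0]) *
             (t[1] if by else 1.0 - t[1]) *
             (t[2] if bz else 1.0 - t[2]))
        out += w * deformed_corners[i]
    return out
```

If the corners are left at their rest positions the map is the identity, and translating every corner translates all enclosed points rigidly; bending or stretching the cage produces the smooth volumetric deformations that cage-based editing exposes to the user.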
{"title":"NeRFshop: Interactive Editing of Neural Radiance Fields","authors":"Clément Jambon, B. Kerbl, Georgios Kopanas, Stavros Diolatzis, Thomas Leimkühler, G. Drettakis","doi":"10.1145/3585499","DOIUrl":"https://doi.org/10.1145/3585499","url":null,"abstract":"Neural Radiance Fields (NeRFs) have revolutionized novel view synthesis for captured scenes, with recent methods allowing interactive free-viewpoint navigation and fast training for scene reconstruction. However, the implicit representations used by these methods—often including neural networks and complex encodings—make them difficult to edit. Some initial methods have been proposed, but they suffer from limited editing capabilities and/or from a lack of interactivity, and are thus unsuitable for interactive editing of captured scenes. We tackle both limitations and introduce NeRFshop, a novel end-to-end method that allows users to interactively select and deform objects through cage-based transformations. NeRFshop provides fine scribble-based user control for the selection of regions or objects to edit, semi-automatic cage creation, and interactive volumetric manipulation of scene content thanks to our GPU-friendly two-level interpolation scheme. Further, we introduce a preliminary approach that reduces potential resulting artifacts of these transformations with a volumetric membrane interpolation technique inspired by Poisson image editing and provide a process that \"distills\" the edits into a standalone NeRF representation.","PeriodicalId":74536,"journal":{"name":"Proceedings of the ACM on computer graphics and interactive techniques","volume":"6 1","pages":"1:1-1:21"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64067887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 19
Journal: Proceedings of the ACM on computer graphics and interactive techniques