
ACM Transactions on Graphics (TOG): Latest Articles

Capturing Animation-Ready Isotropic Materials Using Systematic Poking
Pub Date : 2023-12-04 DOI: 10.1145/3618406
Huanyu Chen, Danyong Zhao, J. Barbič
Capturing material properties of real-world elastic solids is both challenging and highly relevant to many applications in computer graphics, robotics and related fields. We give a non-intrusive, in-situ and inexpensive approach to measure the nonlinear elastic energy density function of man-made materials and biological tissues. We poke the elastic object with 3D-printed rigid cylinders of known radii, and use a precision force meter to record the contact force as a function of the indentation depth, which we measure using a force meter stand, or a novel unconstrained laser setup. We model the 3D elastic solid using the Finite Element Method (FEM), and elastic energy using a compressible Valanis-Landel material that generalizes Neo-Hookean materials by permitting arbitrary tensile behavior under large deformations. We then use optimization to fit the nonlinear isotropic elastic energy so that the FEM contact forces and indentations match their measured real-world counterparts. Because we use carefully designed cubic splines, our materials are accurate in a large range of stretches and robust to inversions, and are therefore "animation-ready" for computer graphics applications. We demonstrate how to exploit radial symmetry to convert the 3D elastostatic contact problem to the mathematically equivalent 2D problem, which vastly accelerates optimization. We also greatly improve the theory and robustness of stretch-based elastic materials, by giving a simple and elegant formula to compute the tangent stiffness matrix, with rigorous proofs and singularity handling. We also contribute the observation that volume compressibility can be estimated by poking with rigid cylinders of different radii, which avoids optical cameras and greatly simplifies experiments. We validate our method by performing full 3D simulations using the optimized materials and confirming that they match real-world forces, indentations and real deformed 3D shapes. We also validate it using a "Shore 00" durometer, a standard device for measuring material hardness.
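The core fitting idea (adjust material parameters until simulated contact forces match the measured force-indentation data) can be sketched with a toy model. The cubic force law and its parameters below are hypothetical stand-ins for the paper's FEM-based Valanis-Landel fit; because this toy model is linear in its parameters, the fit reduces to ordinary least squares.

```python
import numpy as np

# Toy nonlinear force-indentation curve: f(d) = a*d + b*d**3
# (a stand-in for the FEM contact force; a and b are hypothetical).
depths = np.linspace(0.0, 0.01, 20)               # indentation depths [m]
true_a, true_b = 500.0, 2.0e7                     # ground-truth parameters
measured = true_a * depths + true_b * depths**3   # synthetic "measurements"

# The model is linear in (a, b), so the fit-to-measurements step
# reduces to ordinary least squares on the design matrix [d, d^3].
A = np.column_stack([depths, depths**3])
(a_fit, b_fit), *_ = np.linalg.lstsq(A, measured, rcond=None)
print(a_fit, b_fit)  # recovers approximately 500.0 and 2.0e7
```

A real pipeline would replace the cubic law with a forward FEM simulation and use a nonlinear optimizer, but the match-simulation-to-measurement objective is the same.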
ACM Transactions on Graphics (TOG), Volume 71(2), pp. 1-27.
Citations: 0
Fluid Simulation on Neural Flow Maps
Pub Date : 2023-12-04 DOI: 10.1145/3618392
Yitong Deng, Hong-Xing Yu, Diyang Zhang, Jiajun Wu, Bo Zhu
We introduce Neural Flow Maps, a novel simulation method bridging the emerging paradigm of implicit neural representations with fluid simulation based on the theory of flow maps, to achieve state-of-the-art simulation of inviscid fluid phenomena. We devise a novel hybrid neural field representation, Spatially Sparse Neural Fields (SSNF), which fuses small neural networks with a pyramid of overlapping, multi-resolution, and spatially sparse grids, to compactly represent long-term spatiotemporal velocity fields at high accuracy. With this neural velocity buffer in hand, we compute long-term, bidirectional flow maps and their Jacobians in a mechanistically symmetric manner, to facilitate drastic accuracy improvement over existing solutions. These long-range, bidirectional flow maps enable high advection accuracy with low dissipation, which in turn facilitates high-fidelity incompressible flow simulations that manifest intricate vortical structures. We demonstrate the efficacy of our neural fluid simulation in a variety of challenging simulation scenarios, including leapfrogging vortices, colliding vortices, vortex reconnections, as well as vortex generation from moving obstacles and density differences. Our examples show increased performance over existing methods in terms of energy conservation, visual complexity, adherence to experimental observations, and preservation of detailed vortical structures.
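A flow map in the sense used above transports points through a velocity field over a long time span. A minimal sketch, with an analytic rigid-rotation field standing in for the paper's neurally stored velocities, looks like this:

```python
import numpy as np

# Minimal flow-map sketch: integrate particle positions through a
# velocity field with RK4. The paper stores such maps via a neural
# velocity buffer (SSNF); here the field is an analytic rotation.
def velocity(p):
    x, y = p
    return np.array([-y, x])  # counter-clockwise rotation, omega = 1

def flow_map(p, t, steps=1000):
    # Advance position p by time t (negative t traces the map backward).
    dt = t / steps
    for _ in range(steps):
        k1 = velocity(p)
        k2 = velocity(p + 0.5 * dt * k1)
        k3 = velocity(p + 0.5 * dt * k2)
        k4 = velocity(p + dt * k3)
        p = p + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return p

start = np.array([1.0, 0.0])
fwd = flow_map(start, 2 * np.pi)             # one full revolution
back = flow_map(flow_map(start, 1.0), -1.0)  # forward then backward
print(fwd, back)  # both return (numerically) to the start point
```

The forward/backward round trip illustrates the bidirectional-map property the method relies on for low-dissipation advection.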
ACM Transactions on Graphics (TOG), Volume 42(24), pp. 1-21.
Citations: 0
An Unified λ-subdivision Scheme for Quadrilateral Meshes with Optimal Curvature Performance in Extraordinary Regions
Pub Date : 2023-12-04 DOI: 10.1145/3618400
Weiyin Ma, Xu Wang, Yue Ma
We propose a unified λ-subdivision scheme with a continuous family of tuned subdivisions for quadrilateral meshes. The main subdivision stencil parameters of the unified scheme are represented as spline functions of the subdominant eigenvalue λ of the respective subdivision matrices, and the λ value can be selected within a wide range to produce desired properties of refined meshes and limit surfaces with optimal curvature performance in extraordinary regions. Spline representations of stencil parameters are constructed from discrete optimized stencil coefficients obtained by a general tuning framework that optimizes eigenvectors of subdivision matrices towards curvature continuity conditions. To further improve the quality of limit surfaces, a weighting function is devised to penalize sign changes of Gauss curvatures on the respective second-order characteristic maps. By selecting an appropriate λ, the resulting unified subdivision scheme produces the anticipated properties for different target applications, including nice properties of several other existing tuned subdivision schemes. Comparison results also validate the advantage of the proposed scheme, which produces higher-quality surfaces when subdividing at lower λ values, a challenging task for other related tuned subdivision schemes.
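The idea of stencil weights governed by a tunable parameter can be illustrated, in one dimension, with the classic four-point curve scheme and its tension parameter w; this is a hypothetical stand-in for the paper's λ-tuned quad-mesh stencils, shown only to make the parameterized-stencil idea concrete.

```python
import numpy as np

# One round of the four-point interpolatory subdivision scheme for a
# closed control polygon. The tension parameter w plays the role of a
# tunable stencil parameter (w = 1/16 gives the standard scheme).
def subdivide(points, w=1.0 / 16.0):
    n = len(points)
    out = []
    for i in range(n):
        p0, p1 = points[(i - 1) % n], points[i]
        p2, p3 = points[(i + 1) % n], points[(i + 2) % n]
        out.append(p1)  # old vertices are kept (interpolatory scheme)
        # New edge point: weights (-w, 1/2+w, 1/2+w, -w) sum to 1,
        # so the scheme is affinely invariant.
        out.append(-w * p0 + (0.5 + w) * p1 + (0.5 + w) * p2 - w * p3)
    return np.array(out)

square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
refined = subdivide(square)
print(len(refined))  # 8 points after one round
```

Because the stencil weights sum to one, the refined polygon keeps the centroid of the original control polygon, a basic sanity property any tuned stencil family must preserve.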
ACM Transactions on Graphics (TOG), Volume 33(10), pp. 1-15.
Citations: 0
Scene-Aware Activity Program Generation with Language Guidance
Pub Date : 2023-12-04 DOI: 10.1145/3618338
Zejia Su, Qingnan Fan, Xuelin Chen, Oliver van Kaick, Hui Huang, Ruizhen Hu
We address the problem of scene-aware activity program generation, which requires decomposing a given activity task into instructions that can be sequentially performed within a target scene to complete the activity. While existing methods have shown the ability to generate rational or executable programs, generating programs with both high rationality and executability still remains a challenge. Hence, we propose a novel method where the key idea is to explicitly combine the language rationality of a powerful language model with dynamic perception of the target scene where instructions are executed, to generate programs with high rationality and executability. Our method iteratively generates instructions for the activity program. Specifically, a two-branch feature encoder operates on a language-based and graph-based representation of the current generation progress to extract language features and scene graph features, respectively. These features are then used by a predictor to generate the next instruction in the program. Subsequently, another module performs the predicted action and updates the scene for perception in the next iteration. Extensive evaluations are conducted on the VirtualHome-Env dataset, showing the advantages of our method over previous work. Key algorithmic designs are validated through ablation studies, and results on other types of inputs are also presented to show the generalizability of our method.
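The iterative generate-execute loop can be caricatured as follows; the instruction names and the rule-based "predictor" are hypothetical stand-ins for the paper's learned two-branch encoder and predictor.

```python
# Toy sketch of the generate-execute loop: a predictor chooses the next
# instruction from the current scene state, the instruction is executed,
# and the updated scene feeds the next prediction step.

def predict_next(scene):
    # A learned model would consume language + scene-graph features here.
    if not scene["near_fridge"]:
        return "walk_to fridge"
    if not scene["fridge_open"]:
        return "open fridge"
    if not scene["holding_milk"]:
        return "grab milk"
    return None  # activity complete

def execute(scene, instruction):
    # Executing an instruction updates the scene for the next iteration.
    updates = {
        "walk_to fridge": {"near_fridge": True},
        "open fridge": {"fridge_open": True},
        "grab milk": {"holding_milk": True},
    }
    scene.update(updates[instruction])

scene = {"near_fridge": False, "fridge_open": False, "holding_milk": False}
program = []
while (instr := predict_next(scene)) is not None:
    program.append(instr)
    execute(scene, instr)
print(program)  # ['walk_to fridge', 'open fridge', 'grab milk']
```

The point of the loop structure is that executability is checked step by step against the evolving scene, rather than emitting the whole program blindly.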
ACM Transactions on Graphics (TOG), Volume 22(18), pp. 1-16.
Citations: 0
Commonsense Knowledge-Driven Joint Reasoning Approach for Object Retrieval in Virtual Reality
Pub Date : 2023-12-04 DOI: 10.1145/3618320
Haiyan Jiang, Dongdong Weng, Xiaonuo Dongye, Le Luo, Zhenliang Zhang
National Key Laboratory of General Artificial Intelligence, Beijing Institute for General Artificial Intelligence (BIGAI), China

Retrieving out-of-reach objects is a crucial task in virtual reality (VR). One of the most commonly used approaches for this task is the gesture-based approach, which allows for bare-hand, eyes-free, and direct retrieval. However, previous work has primarily focused on assigned gesture design, neglecting the context. This can make it challenging to accurately retrieve an object from a large number of objects due to the one-to-one mapping metaphor, limitations of finger poses, and memory burdens. There is a general consensus that objects and contexts are related, which suggests that the object expected to be retrieved is related to the context, including the scene and the objects with which users interact. As such, we propose a commonsense knowledge-driven joint reasoning approach for object retrieval, where human grasping gestures and context are modeled using an And-Or graph (AOG). This approach enables users to accurately retrieve objects from a large number of candidate objects by using natural grasping gestures based on their experience of grasping physical objects. Experimental results demonstrate that our proposed approach improves retrieval accuracy. We also propose an object retrieval system based on the proposed approach. Two user studies show that our system enables efficient object retrieval in virtual environments (VEs).
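The joint use of context and grasping gesture can be sketched as a simple product of scores: a context prior over objects combined with a gesture likelihood. The paper's And-Or graph reasoning is far richer, and all object names and probabilities below are made up for illustration.

```python
# Toy joint reasoning: score each candidate object by combining a
# context prior P(object | scene) with a gesture likelihood
# P(grasp | object), then normalize and pick the best candidate.
context_prior = {"mug": 0.5, "pen": 0.3, "basketball": 0.2}
gesture_likelihood = {"mug": 0.3, "pen": 0.1, "basketball": 0.9}

posterior = {
    obj: context_prior[obj] * gesture_likelihood[obj]
    for obj in context_prior
}
total = sum(posterior.values())
posterior = {obj: p / total for obj, p in posterior.items()}

best = max(posterior, key=posterior.get)
print(best)  # the wide grasp outweighs the scene prior: "basketball"
```

Even this crude product rule shows the key behavior: a strong gesture signal can override a weaker scene prior, and vice versa.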
ACM Transactions on Graphics (TOG), Volume 73(22), pp. 1-18.
Citations: 0
Efficient Cone Singularity Construction for Conformal Parameterizations
Pub Date : 2023-12-04 DOI: 10.1145/3618407
Mo Li, Qing Fang, Zheng Zhang, Ligang Liu, Xiao-Ming Fu
We propose an efficient method to construct sparse cone singularities under distortion-bounded constraints for conformal parameterizations. Central to our algorithm is the use of shape derivatives to move cones for distortion reduction without changing the number of cones. In particular, a supernodal sparse Cholesky update significantly accelerates this movement process. To satisfy the distortion-bounded constraint, we alternately move cones and add cones. The capability and feasibility of our approach are demonstrated on a data set containing 3885 models. Compared with the state-of-the-art method, we achieve an average speedup of 15× with slightly fewer cones for the same amount of distortion.
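The alternate move/add loop can be caricatured with a clustering toy: "cones" are cluster centers, a Lloyd-style mean update stands in for the shape-derivative movement, and a new center is added at the worst point while the distortion bound is violated. This is illustrative only, not the paper's algorithm.

```python
import numpy as np

# Alternating move/add toy: move centers to reduce a distortion proxy
# (max distance of any point to its nearest center), and add a center
# at the worst point whenever the bound is still violated.
rng = np.random.default_rng(0)
points = rng.uniform(0.0, 1.0, size=(200, 2))
centers = points[:1].copy()
bound = 0.25  # distortion bound on the proxy

for _ in range(50):
    # Move step: pull each center to the mean of its assigned points.
    d = np.linalg.norm(points[:, None] - centers[None], axis=2)
    assign = d.argmin(axis=1)
    for c in range(len(centers)):
        if (assign == c).any():
            centers[c] = points[assign == c].mean(axis=0)
    # Add step: if the bound is violated, add a center at the worst point.
    d = np.linalg.norm(points[:, None] - centers[None], axis=2).min(axis=1)
    if d.max() <= bound:
        break
    centers = np.vstack([centers, points[d.argmax()]])

print(len(centers), d.max())  # few centers, distortion within the bound
```

The structure mirrors the paper's loop: movement alone keeps the count sparse, and insertion only fires when the bound cannot otherwise be met.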
ACM Transactions on Graphics (TOG), Volume 65(17), pp. 1-13.
Citations: 0
Amortizing Samples in Physics-Based Inverse Rendering Using ReSTIR
Pub Date : 2023-12-04 DOI: 10.1145/3618331
YU-CHEN Wang, Chris Wyman, Lifan Wu, Shuang Zhao
Recently, great progress has been made in physics-based differentiable rendering. Existing differentiable rendering techniques typically focus on static scenes, but during inverse rendering, a key application of differentiable rendering, the scene is updated dynamically by each gradient step. In this paper, we take a first step toward leveraging temporal data in the context of inverse direct illumination. By adopting reservoir-based spatiotemporal importance resampling (ReSTIR), we introduce new Monte Carlo estimators for both interior and boundary components of differential direct illumination integrals. We also integrate ReSTIR with antithetic sampling to further improve its effectiveness. At equal frame time, our methods produce gradient estimates with up to 100× lower relative error than baseline methods. Additionally, we propose an inverse-rendering pipeline that incorporates these estimators and provides reconstructions with up to 20× lower error.
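At the heart of ReSTIR is a one-slot weighted reservoir that streams candidate samples and keeps each with probability proportional to its resampling weight. A minimal sketch of that update (omitting the spatial and temporal reservoir merging that gives ReSTIR its amortization) is:

```python
import random

# One-slot weighted reservoir for resampled importance sampling:
# each streamed candidate survives with probability weight / w_sum.
class Reservoir:
    def __init__(self, rng):
        self.rng = rng
        self.sample = None   # the surviving candidate
        self.w_sum = 0.0     # running sum of resampling weights
        self.count = 0       # number of candidates seen

    def update(self, candidate, weight):
        self.w_sum += weight
        self.count += 1
        if self.rng.random() < weight / self.w_sum:
            self.sample = candidate

rng = random.Random(42)
res = Reservoir(rng)
candidates = [("light_a", 0.1), ("light_b", 2.5), ("light_c", 0.4)]
for cand, w in candidates:
    res.update(cand, w)

print(res.sample, res.w_sum)  # w_sum == 3.0; sample is one candidate
```

The candidate names and weights are made up; the invariant that matters is that the reservoir holds one sample plus the running weight sum, which is exactly what gets merged across pixels and frames.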
ACM Transactions on Graphics (TOG), Volume 20(4), pp. 1-17.
Citations: 0
Differentiable Rendering of Parametric Geometry
Pub Date : 2023-12-04 DOI: 10.1145/3618387
Markus Worchel, Marc Alexa
We propose an efficient method for differentiable rendering of parametric surfaces and curves, which enables their use in inverse graphics problems. Our central observation is that a representative triangle mesh can be extracted from a continuous parametric object in a differentiable and efficient way. We derive differentiable meshing operators for surfaces and curves that provide varying levels of approximation granularity. With triangle mesh approximations, we can readily leverage existing machinery for differentiable mesh rendering to handle parametric geometry. Naively combining differentiable tessellation with inverse graphics settings lacks robustness and is prone to reaching undesirable local minima. To this end, we draw a connection between our setting and the optimization of triangle meshes in inverse graphics and present a set of optimization techniques, including regularizations and coarse-to-fine schemes. We show the viability and efficiency of our method in a set of image-based computer-aided design applications.
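Why a mesh extracted from a parametric object is differentiable can be seen on a quadratic Bezier curve: each sampled vertex is a fixed Bernstein combination of the control points, so the vertex-to-parameter derivative is known in closed form and can be checked against finite differences. This is only the one-curve intuition, not the paper's meshing operators.

```python
import numpy as np

# A sampled vertex of a quadratic Bezier curve is a linear combination
# of the control points with Bernstein weights, so d(vertex)/d(P_i)
# is simply the i-th basis weight.
def bezier_point(P, t):
    b = np.array([(1 - t) ** 2, 2 * t * (1 - t), t ** 2])  # Bernstein basis
    return b @ P

P = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 0.0]])  # control points
t = 0.3

analytic = 2 * t * (1 - t)  # d(vertex)/d(P1): the middle basis weight

eps = 1e-6
P_pert = P.copy()
P_pert[1, 1] += eps  # perturb the y-coordinate of the middle control point
fd = (bezier_point(P_pert, t)[1] - bezier_point(P, t)[1]) / eps

print(analytic, fd)  # both approximately 0.42
```

Because every extracted vertex carries such closed-form derivatives, gradients from a mesh renderer can be chained back to the parametric representation.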
ACM Transactions on Graphics (TOG), Volume 28(21), pp. 1-18.
Citations: 0
OpenSVBRDF: A Database of Measured Spatially-Varying Reflectance
Pub Date : 2023-12-04 DOI: 10.1145/3618358
Xiaohe Ma, Xianmin Xu, Leyao Zhang, Kun Zhou, Hongzhi Wu
We present the first large-scale database of measured spatially-varying anisotropic reflectance, consisting of 1,000 high-quality near-planar SVBRDFs, spanning 9 material categories such as wood, fabric and metal. Each sample is captured in 15 minutes, and represented as a set of high-resolution texture maps that correspond to spatially-varying BRDF parameters and local frames. To build this database, we develop a novel integrated system for robust, high-quality and -efficiency reflectance acquisition and reconstruction. Our setup consists of 2 cameras and 16,384 LEDs. We train 64 lighting patterns for efficient acquisition, in conjunction with a network that predicts per-point reflectance in a neural representation from carefully aligned two-view measurements captured under the patterns. The intermediate results are further fine-tuned with respect to the photographs acquired under 63 effective linear lights, and finally fitted to a BRDF model. We report various statistics of the database, and demonstrate its value in the applications of material generation, classification as well as sampling. All related data, including future additions to the database, can be downloaded from https://opensvbrdf.github.io/.
ACM Transactions on Graphics (TOG), pp. 1 - 14, published 2023-12-04.
Citations: 0
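The final stage described above, fitting per-point reflectance measurements to a BRDF model, can be sketched in miniature. The example below is a hypothetical stand-in, not the paper's pipeline: it fits only a single Lambertian albedo per texel by least squares from intensities under known light directions, whereas OpenSVBRDF uses learned lighting patterns, a neural intermediate representation, and a full anisotropic BRDF model.

```python
import numpy as np

def fit_lambertian_albedo(normal, light_dirs, measurements):
    """Least-squares fit of m_k ~= albedo * max(0, n . l_k) for one texel.

    A toy stand-in for BRDF-model fitting: the shading basis is the clamped
    cosine per light, and the albedo is the 1D least-squares solution.
    """
    cosines = np.clip(light_dirs @ normal, 0.0, None)   # (K,) diffuse basis
    return float(np.dot(cosines, measurements) / np.dot(cosines, cosines))

# Synthetic test data: 63 random light directions (echoing the 63 effective
# linear lights in the abstract) over a flat texel with known albedo.
rng = np.random.default_rng(0)
normal = np.array([0.0, 0.0, 1.0])
light_dirs = rng.normal(size=(63, 3))
light_dirs /= np.linalg.norm(light_dirs, axis=1, keepdims=True)
true_albedo = 0.7
measurements = true_albedo * np.clip(light_dirs @ normal, 0.0, None)

print(round(fit_lambertian_albedo(normal, light_dirs, measurements), 3))  # 0.7
```

In a real capture the measurements are noisy and the model has many more parameters (specular lobes, anisotropy, local frames), so the fit becomes a nonlinear optimization per texel rather than a closed-form division.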
Computational Design of Flexible Planar Microstructures
Pub Date : 2023-12-04 DOI: 10.1145/3618396
Zhan Zhang, Christopher Brandt, Jean Jouve, Yue Wang, Tian Chen, Mark Pauly, Julian Panetta
Mechanical metamaterials enable customizing the elastic properties of physical objects by altering their fine-scale structure. A broad gamut of effective material properties can be produced even from a single fabrication material by optimizing the geometry of a periodic microstructure tiling. Past work has extensively studied the capabilities of microstructures in the small-displacement regime, where periodic homogenization of linear elasticity yields computationally efficient optimal design algorithms. However, many applications involve flexible structures undergoing large deformations for which the accuracy of linear elasticity rapidly deteriorates due to geometric nonlinearities. Design of microstructures at finite strains involves a massive increase in computation and is much less explored; no computational tool yet exists to design metamaterials emulating target hyperelastic laws over finite regions of strain space. We make an initial step in this direction, developing algorithms to accelerate homogenization and metamaterial design for nonlinear elasticity and building a complete framework for the optimal design of planar metamaterials. Our nonlinear homogenization method works by efficiently constructing an accurate interpolant of a microstructure's deformation over a finite space of macroscopic strains likely to be endured by the metamaterial. From this interpolant, the homogenized energy density, stress, and tangent elasticity tensor describing the microstructure's effective properties can be inexpensively computed at any strain. Our design tool then fits the effective material properties to a target constitutive law over a region of strain space using a parametric shape optimization approach, producing a directly manufacturable geometry. We systematically test our framework by designing a catalog of materials fitting isotropic Hooke's laws as closely as possible. We demonstrate significantly improved accuracy over traditional linear metamaterial design techniques by fabricating and testing physical prototypes.
ACM Transactions on Graphics (TOG), pp. 1 - 16, published 2023-12-04.
Citations: 0
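The fitting target mentioned at the end of the abstract, an isotropic Hooke's law, has a simple closed form: sigma = lambda * tr(eps) * I + 2 * mu * eps. The sketch below (a hypothetical post-processing step, not the paper's parametric shape optimization) shows how the two Lame parameters can be recovered by linear least squares from (strain, stress) samples of a homogenized material, since the stress is linear in (lambda, mu).

```python
import numpy as np

def fit_isotropic_hooke(strains, stresses):
    """Fit Lame parameters (lam, mu) of sigma = lam*tr(eps)*I + 2*mu*eps.

    Each stress component gives one linear equation in (lam, mu); stacking
    all components of all samples yields an overdetermined linear system.
    """
    rows, rhs = [], []
    for eps, sig in zip(strains, stresses):
        tr = np.trace(eps)
        I = np.eye(eps.shape[0])
        for idx in np.ndindex(eps.shape):
            rows.append([tr * I[idx], 2.0 * eps[idx]])  # columns: lam, mu
            rhs.append(sig[idx])
    (lam, mu), *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return lam, mu

# Synthetic planar (2x2) strain/stress samples from a known isotropic law.
rng = np.random.default_rng(1)
lam_true, mu_true = 3.0, 1.5
strains = [0.5 * (E + E.T) for E in rng.normal(size=(5, 2, 2))]   # symmetrize
stresses = [lam_true * np.trace(e) * np.eye(2) + 2 * mu_true * e
            for e in strains]

lam, mu = fit_isotropic_hooke(strains, stresses)
print(round(float(lam), 3), round(float(mu), 3))  # 3.0 1.5
```

The paper's setting is harder in two ways: the homogenized response is nonlinear in the strain, and the design variables are microstructure shape parameters rather than the constitutive coefficients, so the fit is driven through simulation instead of a closed-form system.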
Journal: ACM Transactions on Graphics (TOG)