
Latest publications: arXiv - CS - Graphics

Few-Shot Unsupervised Implicit Neural Shape Representation Learning with Spatial Adversaries
Pub Date : 2024-08-27 DOI: arxiv-2408.15114
Amine Ouasfi, Adnane Boukhayma
Implicit Neural Representations have gained prominence as a powerful framework for capturing complex data modalities, encompassing a wide range from 3D shapes to images and audio. Within the realm of 3D shape representation, Neural Signed Distance Functions (SDF) have demonstrated remarkable potential in faithfully encoding intricate shape geometry. However, learning SDFs from sparse 3D point clouds in the absence of ground truth supervision remains a very challenging task. While recent methods rely on smoothness priors to regularize the learning, our method introduces a regularization term that leverages adversarial samples around the shape to improve the learned SDFs. Through extensive experiments and evaluations, we illustrate the efficacy of our proposed method, highlighting its capacity to improve SDF learning with respect to baselines and the state-of-the-art using synthetic and real data.
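The adversarial-sample idea can be illustrated with a minimal sketch: perturb query points near the shape in the direction that most increases the predicted distance magnitude, then penalize the SDF there. Everything below is a toy stand-in (an analytic sphere SDF, finite-difference gradients, an FGSM-style signed step), not the authors' exact formulation.

```python
import numpy as np

def sdf_sphere(p, r=1.0):
    """Toy stand-in for a learned SDF: signed distance to a sphere of radius r."""
    return np.linalg.norm(p, axis=-1) - r

def adversarial_sample(p, sdf, eps=0.05, h=1e-4):
    """FGSM-style perturbation: move each query point in the direction that
    most increases |sdf| (sign of a finite-difference gradient)."""
    grad = np.zeros_like(p)
    for i in range(p.shape[-1]):
        dp = np.zeros_like(p)
        dp[..., i] = h
        grad[..., i] = (np.abs(sdf(p + dp)) - np.abs(sdf(p - dp))) / (2 * h)
    return p + eps * np.sign(grad)

rng = np.random.default_rng(0)
pts = rng.standard_normal((128, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
pts *= 1.0 + 0.1 * rng.standard_normal((128, 1))   # near, not on, the surface
adv = adversarial_sample(pts, sdf_sphere)
reg = float(np.mean(np.abs(sdf_sphere(adv))))      # regularizer: small => stable SDF
```

In a training loop, `reg` (evaluated on the learned network instead of `sdf_sphere`) would be added to the data-fitting loss.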
MeshUp: Multi-Target Mesh Deformation via Blended Score Distillation
Pub Date : 2024-08-27 DOI: arxiv-2408.14899
Hyunwoo Kim, Itai Lang, Noam Aigerman, Thibault Groueix, Vladimir G. Kim, Rana Hanocka
We propose MeshUp, a technique that deforms a 3D mesh towards multiple target concepts, and intuitively controls the region where each concept is expressed. Conveniently, the concepts can be defined as either text queries, e.g., "a dog" and "a turtle," or inspirational images, and the local regions can be selected as any number of vertices on the mesh. We can effectively control the influence of the concepts and mix them together using a novel score distillation approach, referred to as Blended Score Distillation (BSD). BSD operates on each attention layer of the denoising U-Net of a diffusion model as it extracts and injects the per-objective activations into a unified denoising pipeline from which the deformation gradients are calculated. To localize the expression of these activations, we create a probabilistic Region of Interest (ROI) map on the surface of the mesh, and turn it into 3D-consistent masks that we use to control the expression of these activations. We demonstrate the effectiveness of BSD empirically and show that it can deform various meshes towards multiple objectives.
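At the gradient level, the blending idea can be sketched as an ROI-weighted mix of per-objective deformation gradients. This toy sketch skips the diffusion U-Net and attention layers entirely; the two gradients and soft masks below are invented for illustration.

```python
import numpy as np

V = 100                                    # vertices of a toy mesh
rng = np.random.default_rng(0)
grad_dog = rng.standard_normal((V, 3))     # per-vertex gradient for concept A
grad_turtle = rng.standard_normal((V, 3))  # per-vertex gradient for concept B

# Probabilistic ROI weights (soft masks over the mesh), normalized per vertex
roi_a = np.zeros(V); roi_a[:60] = 1.0      # concept A expressed on the front
roi_b = np.zeros(V); roi_b[40:] = 1.0      # concept B on the back (overlap 40..59)
w = np.stack([roi_a, roi_b], axis=1)
w = w / np.clip(w.sum(axis=1, keepdims=True), 1e-8, None)

# Blended deformation gradient: ROI-weighted mix of the per-objective gradients
blended = w[:, :1] * grad_dog + w[:, 1:2] * grad_turtle
```

Vertices inside a single ROI follow that concept's gradient; vertices in the overlap receive an even mix.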
DynaSurfGS: Dynamic Surface Reconstruction with Planar-based Gaussian Splatting
Pub Date : 2024-08-26 DOI: arxiv-2408.13972
Weiwei Cai, Weicai Ye, Peng Ye, Tong He, Tao Chen
Dynamic scene reconstruction has garnered significant attention in recent years due to its capabilities in high-quality and real-time rendering. Among various methodologies, constructing a 4D spatial-temporal representation, such as 4D-GS, has gained popularity for its high-quality rendered images. However, these methods often produce suboptimal surfaces, as the discrete 3D Gaussian point clouds fail to align with the object's surface precisely. To address this problem, we propose DynaSurfGS to achieve both photorealistic rendering and high-fidelity surface reconstruction of dynamic scenarios. Specifically, the DynaSurfGS framework first incorporates Gaussian features from 4D neural voxels with the planar-based Gaussian Splatting to facilitate precise surface reconstruction. It leverages normal regularization to enforce the smoothness of the surface of dynamic objects. It also incorporates the as-rigid-as-possible (ARAP) constraint to maintain the approximate rigidity of local neighborhoods of 3D Gaussians between timesteps and ensure that adjacent 3D Gaussians remain closely aligned throughout. Extensive experiments demonstrate that DynaSurfGS surpasses state-of-the-art methods in both high-fidelity surface reconstruction and photorealistic rendering.
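The rigidity idea can be sketched as a penalty on changes in pairwise distances between neighboring Gaussian centers across timesteps. This is a simplification of full ARAP (which fits per-neighborhood rotations), but it shows the key property: a rigid motion of a neighborhood incurs zero penalty.

```python
import numpy as np

def arap_residual(x_t, x_t1, neighbors):
    """Simplified ARAP penalty: pairwise distances between neighboring
    Gaussian centers should be preserved from timestep t to t+1."""
    res = 0.0
    for i, nbrs in neighbors.items():
        for j in nbrs:
            d_t = np.linalg.norm(x_t[i] - x_t[j])
            d_t1 = np.linalg.norm(x_t1[i] - x_t1[j])
            res += (d_t1 - d_t) ** 2
    return res

x = np.random.default_rng(0).standard_normal((5, 3))   # toy Gaussian centers
nbrs = {0: [1, 2], 1: [0], 2: [0], 3: [4], 4: [3]}     # local neighborhoods
r = arap_residual(x, x + np.array([1.0, 2.0, 3.0]), nbrs)  # rigid translation
```

A non-rigid deformation (stretching one neighbor away) would make `r` positive, pulling adjacent Gaussians back into alignment when minimized.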
Exploiting ray tracing technology through OptiX to compute particle interactions with cutoff in a 3D environment on GPU
Pub Date : 2024-08-26 DOI: arxiv-2408.14247
Bérenger Bramas
Computing on graphics processing units (GPUs) has become standard in scientific computing, allowing for incredible performance gains over classical CPUs for many computational methods. As GPUs were originally designed for 3D rendering, they still have several features for that purpose that are not used in scientific computing. Among them, ray tracing is a powerful technology used to render 3D scenes. In this paper, we propose exploiting ray tracing technology to compute particle interactions with a cutoff distance in a 3D environment. We describe algorithmic tricks and geometric patterns to find the interaction lists for each particle. This approach allows us to compute interactions with quasi-linear complexity in the number of particles without building a grid of cells or an explicit kd-tree. We compare the performance of our approach with a classical approach based on a grid of cells and show that, currently, ours is slower in most cases but could pave the way for future methods.
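What any such method must compute is the per-particle interaction list under a cutoff distance. The brute-force reference below is only the specification; the grid-of-cells baseline and the OptiX ray-tracing approach exist precisely to avoid this O(N^2) distance matrix.

```python
import numpy as np

def interaction_lists(pos, cutoff):
    """Brute-force reference: for each particle, the indices of all other
    particles within the cutoff distance."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    n = len(pos)
    return [np.nonzero((d[i] <= cutoff) & (np.arange(n) != i))[0]
            for i in range(n)]

pos = np.array([[0.0, 0.0, 0.0],
                [0.5, 0.0, 0.0],
                [3.0, 0.0, 0.0]])
lists = interaction_lists(pos, cutoff=1.0)   # only the first two interact
```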
Quantized neural network for complex hologram generation
Pub Date : 2024-08-25 DOI: arxiv-2409.06711
Yutaka Endo, Minoru Oikawa, Timothy D. Wilkinson, Tomoyoshi Shimobaba, Tomoyoshi Ito
Computer-generated holography (CGH) is a promising technology for augmented reality displays, such as head-mounted or head-up displays. However, its high computational demand makes it impractical for implementation. Recent efforts to integrate neural networks into CGH have successfully accelerated computing speed, demonstrating the potential to overcome the trade-off between computational cost and image quality. Nevertheless, deploying neural network-based CGH algorithms on computationally limited embedded systems requires more efficient models with lower computational cost, memory footprint, and power consumption. In this study, we developed a lightweight model for complex hologram generation by introducing neural network quantization. Specifically, we built a model based on tensor holography and quantized it from 32-bit floating-point precision (FP32) to 8-bit integer precision (INT8). Our performance evaluation shows that the proposed INT8 model achieves hologram quality comparable to that of the FP32 model while reducing the model size by approximately 70% and increasing the speed fourfold. Additionally, we implemented the INT8 model on a system-on-module to demonstrate its deployability on embedded platforms and high power efficiency.
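The basic numeric operation behind such a model, symmetric per-tensor FP32-to-INT8 quantization, fits in a few lines. This sketch illustrates the scheme only, not the tensor-holography network or its calibration; note that INT8 storage is 4x smaller than FP32, consistent in direction with the roughly 70% size reduction reported.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor quantization FP32 -> INT8."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map INT8 codes back to approximate FP32 values."""
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).standard_normal(256).astype(np.float32)
q, s = quantize_int8(w)
err = float(np.max(np.abs(dequantize(q, s) - w)))   # bounded by scale / 2
```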
McGrids: Monte Carlo-Driven Adaptive Grids for Iso-Surface Extraction
Pub Date : 2024-08-25 DOI: arxiv-2409.06710
Daxuan Ren, Hezi Shi, Jianmin Zheng, Jianfei Cai
Iso-surface extraction from an implicit field is a fundamental process in various applications of computer vision and graphics. When dealing with geometric shapes with complicated geometric details, many existing algorithms suffer from high computational costs and memory usage. This paper proposes McGrids, a novel approach to improve the efficiency of iso-surface extraction. The key idea is to construct adaptive grids for iso-surface extraction rather than using a simple uniform grid as prior art does. Specifically, we formulate the problem of constructing adaptive grids as a probability sampling problem, which is then solved by a Monte Carlo process. We demonstrate McGrids' capability with extensive experiments on both analytical SDFs computed from surface meshes and learned implicit fields from real multiview images. The experiment results show that our McGrids can significantly reduce the number of implicit field queries, resulting in significant memory reduction, while producing high-quality meshes with rich geometric details.
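The probability-sampling view can be sketched with simple rejection sampling: accept candidate query points with a probability that decays with |SDF|, so queries concentrate near the iso-surface instead of filling a uniform grid. The Gaussian acceptance kernel and the analytic sphere SDF are illustrative choices, not McGrids' actual sampler.

```python
import numpy as np

def mc_adaptive_samples(sdf, n, lo=-1.5, hi=1.5, sigma=0.3, seed=0):
    """Rejection-sample query points with acceptance probability concentrated
    where |sdf| is small, i.e., near the iso-surface."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(lo, hi, size=(20 * n, 3))       # uniform candidates
    p_accept = np.exp(-(sdf(pts) / sigma) ** 2)       # high near the surface
    keep = rng.random(len(pts)) < p_accept
    return pts[keep][:n]

sphere = lambda p: np.linalg.norm(p, axis=-1) - 1.0   # toy implicit field
samples = mc_adaptive_samples(sphere, 200)
```

The accepted points cluster in a thin shell around the unit sphere, which is where extraction queries actually pay off.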
Real-Time Rendering of Glints in the Presence of Area Lights
Pub Date : 2024-08-24 DOI: arxiv-2408.13611
Tom Kneiphof, Reinhard Klein
Many real-world materials are characterized by a glittery appearance. Reproducing this effect in physically based renderings is a challenging problem due to its discrete nature, especially in real-time applications which require a consistently low runtime. Recent work focuses on glittery appearance illuminated by infinitesimally small light sources only. For light sources like the sun this approximation is a reasonable choice. In the real world, however, all light sources are fundamentally area light sources. In this paper, we derive an efficient method for rendering glints illuminated by spatially constant diffuse area lights in real time. To this end, we require an adequate estimate for the probability of a single microfacet to be correctly oriented for reflection from the source to the observer. A good estimate is achieved either using linearly transformed cosines (LTCs) for large light sources, or a locally constant approximation of the normal distribution for small spherical caps of light directions. To compute the resulting number of reflecting microfacets, we employ a counting model based on the binomial distribution. In the evaluation, we demonstrate the visual accuracy of our approach, which is easily integrated into existing real-time rendering frameworks, especially if they already implement shading for area lights using LTCs and a counting model for glint shading under point and directional illumination. Besides the overhead of the preexisting constituents, our method adds little to no additional overhead.
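The binomial counting model is easy to state: if a pixel footprint covers N microfacets and each is correctly oriented with probability p, the number of reflecting facets is Binomial(N, p). A sketch with made-up N and p:

```python
import numpy as np

def glint_count_moments(n_facets, p_reflect):
    """Counting model: reflecting facets per footprint ~ Binomial(N, p)."""
    mean = n_facets * p_reflect
    var = n_facets * p_reflect * (1.0 - p_reflect)
    return mean, var

N, p = 10_000, 0.002                       # hypothetical footprint and probability
mean, var = glint_count_moments(N, p)      # expected ~20 glints per footprint

# Sampling the model reproduces the predicted mean
counts = np.random.default_rng(0).binomial(n=N, p=p, size=5_000)
```

In the paper's setting, p comes from the LTC or spherical-cap estimate, and sampling the count per footprint is what produces the discrete sparkle.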
BiGS: Bidirectional Gaussian Primitives for Relightable 3D Gaussian Splatting
Pub Date : 2024-08-23 DOI: arxiv-2408.13370
Zhenyuan Liu, Yu Guo, Xinyuan Li, Bernd Bickel, Ran Zhang
We present Bidirectional Gaussian Primitives, an image-based novel view synthesis technique designed to represent and render 3D objects with surface and volumetric materials under dynamic illumination. Our approach integrates light intrinsic decomposition into the Gaussian splatting framework, enabling real-time relighting of 3D objects. To unify surface and volumetric material within a cohesive appearance model, we adopt a light- and view-dependent scattering representation via bidirectional spherical harmonics. Our model does not use a specific surface normal-related reflectance function, making it more compatible with volumetric representations like Gaussian splatting, where the normals are undefined. We demonstrate our method by reconstructing and rendering objects with complex materials. Using One-Light-At-a-Time (OLAT) data as input, we can reproduce photorealistic appearances under novel lighting conditions in real time.
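A separable toy version of a bidirectional spherical-harmonics response: expand both the light and view directions in a low-order real SH basis and couple them through a coefficient matrix, so the response depends on both directions but never on a surface normal. The band-0/1 basis and the coefficient matrix below are illustrative assumptions; the paper's exact parameterization is not reproduced here.

```python
import numpy as np

def sh_basis(d):
    """First four real spherical harmonics (bands 0 and 1) for a unit direction."""
    x, y, z = d
    return np.array([0.282095,
                     0.488603 * y,
                     0.488603 * z,
                     0.488603 * x])

def bidirectional_response(coeffs, wi, wo):
    """Toy bidirectional model: c(wi, wo) = Y(wi)^T C Y(wo).
    Light- and view-dependent, with no normal-based reflectance function."""
    return float(sh_basis(wi) @ coeffs @ sh_basis(wo))

C = np.eye(4) * 0.5                         # hypothetical per-primitive coefficients
wi = np.array([0.0, 0.0, 1.0])              # incoming light direction
wo = np.array([0.0, 0.0, 1.0])              # viewing direction
c = bidirectional_response(C, wi, wo)
```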
End-to-end Surface Optimization for Light Control
Pub Date : 2024-08-23 DOI: arxiv-2408.13117
Yuou Sun, Bailin Deng, Juyong Zhang
Designing a freeform surface to reflect or refract light to achieve a target distribution is a challenging inverse problem. In this paper, we propose an end-to-end optimization strategy for an optical surface mesh. Our formulation leverages a novel differentiable rendering model, and is directly driven by the difference between the resulting light distribution and the target distribution. We also enforce geometric constraints related to fabrication requirements, to facilitate CNC milling and polishing of the designed surface. To address the issue of local minima, we formulate a face-based optimal transport problem between the current mesh and the target distribution, which makes effective large changes to the surface shape. The combination of our optimal transport update and rendering-guided optimization produces an optical surface design with a resulting image closely resembling the target, while the fabrication constraints in our optimization help to ensure consistency between the rendering model and the final physical results. The effectiveness of our algorithm is demonstrated on a variety of target images using both simulated rendering and physical prototypes.
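Why an optimal transport update escapes local minima can be illustrated in 1D, where sorted matching is exactly optimal: each source sample jumps toward the equal-rank target sample, a large non-local move of the kind local gradient descent struggles to make. The paper's face-based OT on meshes is far more involved; this is only the 1D analogue.

```python
import numpy as np

def ot_1d_update(source, target, step=1.0):
    """1D optimal transport by sorted matching: monotone (rank-preserving)
    assignments are optimal in 1D, so match source and target by rank."""
    order_s = np.argsort(source)
    order_t = np.argsort(target)
    update = np.zeros_like(source)
    update[order_s] = target[order_t] - source[order_s]
    return source + step * update

src = np.array([0.9, 0.1, 0.5])   # current distribution samples
tgt = np.array([2.0, 1.0, 3.0])   # target distribution samples
moved = ot_1d_update(src, tgt)    # each sample jumps to its rank-matched target
```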
设计一个自由曲面来反射或折射光线以实现目标分布是一个具有挑战性的逆问题。在本文中,我们提出了一种针对光学表面网格的端到端优化策略。我们的方案利用了一种新颖的可微分渲染模型,并直接由结果光分布与目标分布之间的差异驱动。为了解决局部最小值问题,我们在当前网格和目标分布之间提出了一个基于面的优化传输问题,从而有效地对表面形状进行大幅改变。最优传输更新与渲染引导优化相结合,产生了与目标图像非常相似的光学表面设计,而优化中的制造约束有助于确保渲染模型与最终物理结果之间的一致性。我们使用模拟渲染和物理原型在各种目标图像上演示了我们算法的有效性。
End-to-end Surface Optimization for Light Control
Pub Date : 2024-08-23 DOI: arxiv-2408.13117
Yuou Sun, Bailin Deng, Juyong Zhang
Citations: 0
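The pipeline this abstract describes — a smooth, differentiable rendering of the light distribution, with the surface updated by descending on its mismatch with a target — can be illustrated in a toy 1D setting. The following is a minimal sketch under assumed simplifications (per-facet slopes, small-angle reflection, Gaussian splatting, finite-difference gradients); it is not the paper's actual rendering model or its optimal transport update:

```python
import numpy as np

# Toy 1D analogue (illustrative assumptions, not the paper's model):
# a "mirror" is parameterized by per-facet slopes s[i]; a parallel ray
# hitting facet i lands on a screen at x0[i] + 2*L*s[i] (small-angle
# reflection). Each landing point is splatted as a Gaussian, giving a
# smooth light distribution whose mismatch with a target can be descended on.

def render(slopes, screen, x0, L=1.0, sigma=0.05):
    """Smooth 'rendering': splat each ray's landing point as a Gaussian."""
    hits = x0 + 2.0 * L * slopes                  # ray landing positions
    d = screen[None, :] - hits[:, None]
    img = np.exp(-0.5 * (d / sigma) ** 2).sum(axis=0)
    return img / img.sum()                        # normalized light density

def loss(slopes, target, screen, x0):
    """L2 mismatch between rendered and target distributions."""
    r = render(slopes, screen, x0)
    return float(((r - target) ** 2).sum())

n = 32
x0 = np.linspace(-1.0, 1.0, n)                    # facet footprints
screen = np.linspace(-2.0, 2.0, 200)
target = np.exp(-0.5 * ((screen - 0.5) / 0.3) ** 2)
target /= target.sum()                            # target light distribution

slopes = np.zeros(n)
eps, lr = 1e-4, 5.0
for _ in range(100):
    base = loss(slopes, target, screen, x0)
    # finite-difference gradient, standing in for an analytic
    # differentiable rendering model
    g = np.array([(loss(slopes + eps * np.eye(n)[i], target, screen, x0)
                   - base) / eps for i in range(n)])
    slopes -= lr * g                              # rendering-guided update

final = loss(slopes, target, screen, x0)
```

The paper additionally escapes local minima with a face-based optimal transport step that makes large coordinated changes to the surface; this sketch omits that and uses only the local gradient-driven update.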
A Riemannian Approach for Spatiotemporal Analysis and Generation of 4D Tree-shaped Structures
Pub Date : 2024-08-22 DOI: arxiv-2408.12443
Tahmina Khanam, Hamid Laga, Mohammed Bennamoun, Guanjin Wang, Ferdous Sohel, Farid Boussaid, Guan Wang, Anuj Srivastava
We propose the first comprehensive approach for modeling and analyzing the spatiotemporal shape variability in tree-like 4D objects, i.e., 3D objects whose shapes bend, stretch, and change in their branching structure over time as they deform, grow, and interact with their environment. Our key contribution is the representation of tree-like 3D shapes using Square Root Velocity Function Trees (SRVFT). By solving the spatial registration in the SRVFT space, which is equipped with an L2 metric, 4D tree-shaped structures become time-parameterized trajectories in this space. This reduces the problem of modeling and analyzing 4D tree-like shapes to that of modeling and analyzing elastic trajectories in the SRVFT space, where elasticity refers to time warping. In this paper, we propose a novel mathematical representation of the shape space of such trajectories, a Riemannian metric on that space, and computational tools for fast and accurate spatiotemporal registration and geodesics computation between 4D tree-shaped structures. Leveraging these building blocks, we develop a full framework for modelling the spatiotemporal variability using statistical models and generating novel 4D tree-like structures from a set of exemplars. We demonstrate and validate the proposed framework using real 4D plant data.
Citations: 0
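For intuition on this entry's core building block: the Square Root Velocity Function maps a curve c(t) to q(t) = c'(t)/√‖c'(t)‖, under which elastic comparison of curves reduces to an ordinary L2 distance. Below is a minimal sketch for a single 3D curve, assuming a simple finite-difference discretization; the paper's SRVFT construction extends this idea to whole branching tree structures:

```python
import numpy as np

def srvf(curve, dt):
    """Square Root Velocity Function: q(t) = c'(t) / sqrt(||c'(t)||)."""
    v = np.gradient(curve, dt, axis=0)                    # discrete velocity c'(t)
    speed = np.maximum(np.linalg.norm(v, axis=1), 1e-12)  # guard zero speed
    return v / np.sqrt(speed)[:, None]

def l2_dist(q1, q2, dt):
    """L2 distance between SRVFs, i.e. the elastic distance before time warping."""
    return float(np.sqrt(((q1 - q2) ** 2).sum() * dt))

# Two helices that differ only in how fast they rise along z
n = 100
t = np.linspace(0.0, 1.0, n)
dt = t[1] - t[0]
helix = np.stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t), t], axis=1)
stretched = np.stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t), 2 * t], axis=1)

d_same = l2_dist(srvf(helix, dt), srvf(helix, dt), dt)      # identical shapes
d_diff = l2_dist(srvf(helix, dt), srvf(stretched, dt), dt)  # distinct shapes
```

A full elastic registration would additionally optimize over time reparameterizations (the "time warping" the abstract mentions) and, in the SRVFT setting, over branch correspondences between trees; the L2 distance above is the quantity that such registration minimizes.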