
ACM SIGGRAPH 2003 Papers: latest publications

A data-driven reflectance model
Pub Date: 2003-07-01 DOI: 10.1145/1201775.882343
W. Matusik
We present a generative model for isotropic bidirectional reflectance distribution functions (BRDFs) based on acquired reflectance data. Instead of using analytical reflectance models, we represent each BRDF as a dense set of measurements. This allows us to interpolate and extrapolate in the space of acquired BRDFs to create new BRDFs. We treat each acquired BRDF as a single high-dimensional vector taken from a space of all possible BRDFs. We apply both linear (subspace) and non-linear (manifold) dimensionality reduction tools in an effort to discover a lower-dimensional representation that characterizes our measurements. We let users define perceptually meaningful parametrization directions to navigate in the reduced-dimension BRDF space. On the low-dimensional manifold, movement along these directions produces novel but valid BRDFs.
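The linear (subspace) step described above can be sketched with plain PCA. Everything below (the synthetic data, the dimensions, the choice of three components) is a hypothetical stand-in for the paper's dense BRDF measurements, not its actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for acquired BRDF data: each row is one measured
# BRDF flattened into a high-dimensional vector (the paper tabulates dense
# measurements of real materials).
n_brdfs, n_samples = 20, 500
latent = rng.normal(size=(n_brdfs, 3))             # true low-dim structure
basis = rng.normal(size=(3, n_samples))
brdfs = latent @ basis + 0.01 * rng.normal(size=(n_brdfs, n_samples))

# Linear (subspace) dimensionality reduction via SVD / PCA.
mean = brdfs.mean(axis=0)
U, S, Vt = np.linalg.svd(brdfs - mean, full_matrices=False)
k = 3
coeffs = U[:, :k] * S[:k]                          # per-BRDF low-dim coordinates

# Create a novel BRDF by interpolating two acquired ones in the reduced space.
t = 0.5
new_coeff = (1 - t) * coeffs[0] + t * coeffs[1]
new_brdf = mean + new_coeff @ Vt[:k]

# The acquired BRDFs should be reconstructed almost exactly.
recon = mean + coeffs @ Vt[:k]
err = np.abs(recon - brdfs).max()
print(err < 0.1, new_brdf.shape)
```

Interpolating in the reduced space, rather than between raw measurement vectors, is what keeps the synthesized BRDF inside the span of plausible materials.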
Citations: 908
Simplifying complex environments using incremental textured depth meshes
Pub Date: 2003-07-01 DOI: 10.1145/1201775.882325
Andrew T. Wilson, Dinesh Manocha
We present an incremental algorithm to compute image-based simplifications of a large environment. We use an optimization-based approach to generate samples based on scene visibility, and from each viewpoint create textured depth meshes (TDMs) using sampled range panoramas of the environment. The optimization function minimizes artifacts such as skins and cracks in the reconstruction. We also present an encoding scheme for multiple TDMs that exploits spatial coherence among different viewpoints. The resulting simplifications, incremental textured depth meshes (ITDMs), reduce preprocessing, storage, rendering costs and visible artifacts. Our algorithm has been applied to large, complex synthetic environments comprising millions of primitives. It is able to render them at 20 -- 40 frames a second on a PC with little loss in visual fidelity.
Citations: 49
Clustered principal components for precomputed radiance transfer
Pub Date: 2003-07-01 DOI: 10.1145/1201775.882281
Peter-Pike J. Sloan, J. Hall, J. Hart, John M. Snyder
We compress storage and accelerate performance of precomputed radiance transfer (PRT), which captures the way an object shadows, scatters, and reflects light. PRT records a transfer matrix over many surface points. At run-time, this matrix transforms a vector of spherical harmonic coefficients representing distant, low-frequency source lighting into exiting radiance. Per-point transfer matrices form a high-dimensional surface signal that we compress using clustered principal component analysis (CPCA), which partitions many samples into fewer clusters each approximating the signal as an affine subspace. CPCA thus reduces the high-dimensional transfer signal to a low-dimensional set of per-point weights on a per-cluster set of representative matrices. Rather than computing a weighted sum of representatives and applying this result to the lighting, we apply the representatives to the lighting per-cluster (on the CPU) and weight these results per-point (on the GPU). Since the output of the matrix is lower-dimensional than the matrix itself, this reduces computation. We also increase the accuracy of encoded radiance functions with a new least-squares optimal projection of spherical harmonics onto the hemisphere. We describe an implementation on graphics hardware that performs real-time rendering of glossy objects with dynamic self-shadowing and interreflection without fixing the view or light as in previous work. Our approach also allows significantly increased lighting frequency when rendering diffuse objects and includes subsurface scattering.
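The clustering-plus-affine-subspace idea behind CPCA can be sketched as follows. The data, cluster count, and subspace rank are hypothetical stand-ins for the per-point transfer signal, and plain k-means stands in for the paper's clustering procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for the per-point transfer signal: n surface points,
# each carrying a high-dimensional transfer vector.
n, d = 300, 16
centers = rng.normal(scale=5.0, size=(4, d))
labels_true = rng.integers(0, 4, size=n)
signal = centers[labels_true] + 0.1 * rng.normal(size=(n, d))

# Step 1: partition the samples into clusters (plain k-means here).
k = 4
cent = signal[rng.choice(n, k, replace=False)]
for _ in range(20):
    dist = ((signal[:, None, :] - cent[None]) ** 2).sum(-1)
    lab = dist.argmin(1)
    cent = np.array([signal[lab == j].mean(0) if (lab == j).any() else cent[j]
                     for j in range(k)])

# Step 2: approximate each cluster by an affine subspace (mean + top PCA axes),
# leaving only low-dimensional per-point weights per cluster.
rank = 2
recon = np.empty_like(signal)
for j in range(k):
    mask = lab == j
    if not mask.any():
        continue
    pts = signal[mask]
    mu = pts.mean(0)
    _, _, Vt = np.linalg.svd(pts - mu, full_matrices=False)
    w = (pts - mu) @ Vt[:rank].T          # low-dimensional per-point weights
    recon[mask] = mu + w @ Vt[:rank]      # affine-subspace approximation

err = np.abs(recon - signal).max()
print(err)
```

The compression win is that each point now stores `rank` scalars instead of a full `d`-vector, plus a small per-cluster dictionary.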
Citations: 314
Hierarchical mesh decomposition using fuzzy clustering and cuts
Pub Date: 2003-07-01 DOI: 10.1145/1201775.882369
S. Katz, A. Tal
Cutting up a complex object into simpler sub-objects is a fundamental problem in various disciplines. In image processing, images are segmented while in computational geometry, solid polyhedra are decomposed. In recent years, in computer graphics, polygonal meshes are decomposed into sub-meshes. In this paper we propose a novel hierarchical mesh decomposition algorithm. Our algorithm computes a decomposition into the meaningful components of a given mesh, which generally refers to segmentation at regions of deep concavities. The algorithm also avoids over-segmentation and jaggy boundaries between the components. Finally, we demonstrate the utility of the algorithm in control-skeleton extraction.
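A minimal sketch of the fuzzy-assignment idea, under the simplifying assumption that each face's membership comes from its distances to two patch representatives; the distance values below are invented for illustration:

```python
import numpy as np

# Each mesh face gets a probability of belonging to patch A vs. patch B from
# its distances to the two patch representatives; faces with an ambiguous
# probability form a fuzzy region, which would later be resolved by a cut
# along the deep-concavity boundary (a simplification of the paper's method).
dist_a = np.array([0.1, 0.4, 1.0, 1.1, 2.0, 2.3])   # hypothetical face-to-repA distances
dist_b = np.array([2.2, 2.0, 1.2, 0.9, 0.3, 0.1])   # hypothetical face-to-repB distances

p_a = dist_b / (dist_a + dist_b)     # fuzzy membership in patch A
certain_a = p_a > 0.5 + 0.1          # clearly patch A
certain_b = p_a < 0.5 - 0.1          # clearly patch B
fuzzy = ~(certain_a | certain_b)     # left for the cut refinement

print(p_a.round(2), fuzzy)
```

Keeping the ambiguous faces out of the hard assignment is what lets the final boundary follow the geometry rather than the clustering, avoiding jaggy borders.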
Citations: 744
Smoke simulation for large scale phenomena
Pub Date: 2003-07-01 DOI: 10.1145/1201775.882335
Nick Rasmussen, Duc Quang Nguyen, William A. Geiger, Ronald Fedkiw
In this paper, we present an efficient method for simulating highly detailed large scale participating media such as the nuclear explosions shown in figure 1. We capture this phenomenon by simulating the motion of particles in a velocity field generated by fluid dynamics. A novel aspect of this paper is the creation of highly detailed three-dimensional turbulent velocity fields at interactive rates using a low to moderate amount of memory. The key idea is the combination of two-dimensional high resolution physically based flow fields with a moderate sized three-dimensional Kolmogorov velocity field tiled periodically in space.
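The Kolmogorov component can be sketched by spectrum-shaping white noise in Fourier space. This is a generic construction under the 5/3 law, not the paper's implementation, and the grid size is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)

# Generate a periodic, tileable velocity component whose energy spectrum
# follows the Kolmogorov 5/3 power law: shape white noise in Fourier space
# and invert the FFT.
n = 32
k = np.fft.fftfreq(n) * n
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
kmag = np.sqrt(kx**2 + ky**2 + kz**2)
kmag[0, 0, 0] = 1.0                       # avoid divide-by-zero at the DC mode

# Per-mode amplitude so that the shell-summed energy E(k) ~ k^(-5/3):
# |u_hat(k)|^2 * (shell area ~ k^2) ~ k^(-5/3)  =>  |u_hat| ~ k^(-11/6).
amp = kmag ** (-11.0 / 6.0)
amp[0, 0, 0] = 0.0                        # zero mean flow

noise = np.fft.fftn(rng.normal(size=(n, n, n)))
u = np.real(np.fft.ifftn(amp * noise))    # one periodic velocity component

# Periodicity is what lets a moderate-sized 3D field be tiled seamlessly
# over a large domain, as the abstract describes.
print(u.shape, abs(u.mean()) < 1e-8)
```

Repeating this for three components (with a divergence-free projection, which this sketch omits) yields a turbulent field that can be layered onto the coarse physically based flow.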
Citations: 177
TreeJuxtaposer: scalable tree comparison using Focus+Context with guaranteed visibility
Pub Date: 2003-07-01 DOI: 10.1145/1201775.882291
T. Munzner, François Guimbretière, S. Tasiran, Li Zhang, Yunhong Zhou
Structural comparison of large trees is a difficult task that is only partially supported by current visualization techniques, which are mainly designed for browsing. We present TreeJuxtaposer, a system designed to support the comparison task for large trees of several hundred thousand nodes. We introduce the idea of "guaranteed visibility", where highlighted areas are treated as landmarks that must remain visually apparent at all times. We propose a new methodology for detailed structural comparison between two trees and provide a new nearly-linear algorithm for computing the best corresponding node from one tree to another. In addition, we present a new rectilinear Focus+Context technique for navigation that is well suited to the dynamic linking of side-by-side views while guaranteeing landmark visibility and constant frame rates. These three contributions result in a system delivering a fluid exploration experience that scales both in the size of the dataset and the number of pixels in the display. We have based the design decisions for our system on the needs of a target audience of biologists who must understand the structural details of many phylogenetic, or evolutionary, trees. Our tool is also useful in many other application domains where tree comparison is needed, ranging from network management to call graph optimization to genealogy.
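The interplay of Focus+Context and guaranteed visibility can be illustrated with a 1D rectilinear distortion. This toy mapping is an assumption-laden sketch, not TreeJuxtaposer's navigation code:

```python
def focus_context_1d(x, f0, f1, magnify):
    """Piecewise-linear Focus+Context mapping of [0,1] onto [0,1].

    The focus interval [f0, f1] is expanded to `magnify` times its share of
    the output (capped), and the two context regions are compressed
    uniformly. The map stays monotonic, so any highlighted landmark keeps a
    nonzero screen extent: a 1D caricature of guaranteed visibility.
    """
    w = f1 - f0
    w_out = min(magnify * w, 0.9)              # focus width on screen
    left = f0 * (1 - w_out) / (1 - w)          # compressed left-context width
    if x < f0:
        return x / f0 * left if f0 else 0.0
    if x <= f1:
        return left + (x - f0) / w * w_out
    return left + w_out + (x - f1) / (1 - f1) * (1 - left - w_out)

# Example: magnify the interval [0.4, 0.5] three-fold.
pts = [0.0, 0.4, 0.5, 1.0]
out = [round(focus_context_1d(p, 0.4, 0.5, 3.0), 3) for p in pts]
print(out)
```

Applying such a map independently on the x and y axes gives a rectilinear (rather than radial fisheye) distortion, which keeps tree rows and columns aligned.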
Citations: 338
Rhythmic-motion synthesis based on motion-beat analysis
Pub Date: 2003-07-01 DOI: 10.1145/1201775.882283
Tae-Hoon Kim, Sang Il Park, Sung-yong Shin
Real-time animation of human-like characters is an active research area in computer graphics. The conventional approaches have, however, hardly dealt with the rhythmic patterns of motions, which are essential in handling rhythmic motions such as dancing and locomotive motions. In this paper, we present a novel scheme for synthesizing a new motion from unlabelled example motions while preserving their rhythmic pattern. Our scheme first captures the motion beats from the example motions to extract the basic movements and their transitions. Based on those data, our scheme then constructs a movement transition graph that represents the example motions. Given an input sound signal, our scheme finally synthesizes a novel motion in an on-line manner while traversing the motion transition graph, which is synchronized with the input sound signal and also satisfies kinematic constraints given explicitly and implicitly. Through experiments, we have demonstrated that our scheme can effectively produce a variety of rhythmic motions.
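The motion-beat idea can be caricatured in a few lines: beats in dance-like motion tend to coincide with momentary stops, i.e. local minima of joint speed. The signal below is synthetic and the minimum criterion is a simplification of the paper's analysis:

```python
import numpy as np

# Synthetic joint-speed signal with momentary stops every 0.5 s.
t = np.linspace(0, 4, 400)
speed = np.abs(np.sin(2 * np.pi * t)) + 0.01

# A beat candidate is a strict local minimum of the speed signal.
is_min = (speed[1:-1] < speed[:-2]) & (speed[1:-1] < speed[2:])
beats = t[1:-1][is_min]

# For this signal the beats land near multiples of 0.5 s.
print(beats.round(2))
```

In the paper's pipeline these extracted beats segment the example motions into basic movements, which then populate the movement transition graph.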
Citations: 214
Relighting with 4D incident light fields
Pub Date: 2003-07-01 DOI: 10.1145/1201775.882315
Vincent Masselus, P. Peers, P. Dutré, Y. Willems
We present an image-based technique to relight real objects illuminated by a 4D incident light field, representing the illumination of an environment. By exploiting the richness in angular and spatial variation of the light field, objects can be relit with a high degree of realism. We record photographs of an object, illuminated from various positions and directions, using a projector mounted on a gantry as a moving light source. The resulting basis images are used to create a subset of the full reflectance field of the object. Using this reflectance field, we can create an image of the object, relit with any incident light field and observed from a fixed camera position. To maintain acceptable recording times and reduce the amount of data, we propose an efficient data acquisition method. Since the object can be relit with a 4D incident light field, illumination effects encoded in the light field, such as shafts of shadow or spot light effects, can be realized.
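The property this family of techniques relies on is that relighting is linear in the incident light: an image under new lighting is a weighted sum of the recorded basis images. A sketch with made-up dimensions standing in for the acquired reflectance field:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical reflectance field: one column per recorded light position or
# direction, one row per image pixel.
n_pixels, n_lights = 1000, 64
basis = rng.uniform(size=(n_pixels, n_lights))   # basis images as columns
weights = rng.uniform(size=n_lights)             # sampled incident light field

relit = basis @ weights                          # relit image, one dot product per pixel

# Linearity check: lighting a+b gives image(a) + image(b).
wa, wb = rng.uniform(size=n_lights), rng.uniform(size=n_lights)
lhs = basis @ (wa + wb)
rhs = basis @ wa + basis @ wb
print(np.allclose(lhs, rhs), relit.shape)
```

The 4D aspect of the paper means the weights vary over both position and direction of the incident light, which is what makes spatially varying effects such as shafts of shadow expressible.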
Citations: 137
Twister: a space-warp operator for the two-handed editing of 3D shapes
Pub Date: 2003-07-01 DOI: 10.1145/1201775.882323
Ignacio Llamas, ByungMoon Kim, Joshua Gargus, J. Rossignac, Chris Shaw
A free-form deformation that warps a surface or solid may be specified in terms of one or several point-displacement constraints that must be interpolated by the deformation. The Twister approach introduced here adds the capability to impose an orientation change, adding three rotational constraints, at each displaced point. Furthermore, it solves for a space warp that simultaneously interpolates two sets of such displacement and orientation constraints. With a 6 DoF magnetic tracker in each hand, the user may grab two points on or near the surface of an object and simultaneously drag them to new locations while rotating the trackers to tilt, bend, or twist the shape near the displaced points. Using a new formalism based on a weighted average of screw displacements, Twister computes in realtime a smooth deformation, whose effect decays with distance from the grabbed points, simultaneously interpolating the 12 constraints. It is continuously applied to the shape, providing realtime graphic feedback. The two-hand interface and the resulting deformation are intuitive and hence offer an effective direct manipulation tool for creating or modifying 3D shapes.
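The weighted-average-of-transforms idea can be sketched in 2D, with rigid motions standing in for full 3D screw displacements; the Gaussian decay kernel and the handle data are invented for illustration:

```python
import numpy as np

def rigid(p, center, angle, translation):
    # Apply a 2D rigid motion (rotation about `center`, then translation).
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    return center + R @ (p - center) + translation

def warp(p, handles, radius=1.0):
    # A point's displacement is the distance-weighted average of the
    # handles' rigid displacements, decaying to zero far from the handles
    # so the warp stays local (a simplification of the screw-blend idea).
    disp = np.zeros(2)
    wsum = 0.0
    for center, angle, trans in handles:
        w = np.exp(-np.sum((p - center) ** 2) / radius**2)
        disp += w * (rigid(p, center, angle, trans) - p)
        wsum += w
    return p + disp * min(wsum, 1.0) / max(wsum, 1e-12)

handles = [(np.array([0.0, 0.0]), np.pi / 2, np.array([0.2, 0.0])),
           (np.array([3.0, 0.0]), 0.0, np.array([0.0, 0.5]))]

near = warp(np.array([0.0, 0.0]), handles)   # follows handle 0 almost exactly
far = warp(np.array([50.0, 50.0]), handles)  # essentially unmoved
print(near.round(3), far.round(3))
```

The paper blends screws (coupled rotation and translation about an axis) rather than independent rigid parts, which is what makes the interpolated twist and bend look natural between the two hands; this sketch only shows the locality and weighting structure.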
Citations: 125
Light scattering from human hair fibers
Pub Date: 2003-07-01 DOI: 10.1145/1201775.882345
Steve Marschner, H. Jensen, Mike Cammarano, Steven Worley, P. Hanrahan
Light scattering from hair is normally simulated in computer graphics using Kajiya and Kay's classic phenomenological model. We have made new measurements of scattering from individual hair fibers that exhibit visually significant effects not predicted by Kajiya and Kay's model. Our measurements go beyond previous hair measurements by examining out-of-plane scattering, and together with this previous work they show a multiple specular highlight and variation in scattering with rotation about the fiber axis. We explain the sources of these effects using a model of a hair fiber as a transparent elliptical cylinder with an absorbing interior and a surface covered with tilted scales. Based on an analytical scattering function for a circular cylinder, we propose a practical shading model for hair that qualitatively matches the scattering behavior shown in the measurements. In a comparison between a photograph and rendered images, we demonstrate the new model's ability to match the appearance of real hair.
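For context, the Kajiya-Kay baseline that the measurements are compared against can be written down directly. The vectors below are arbitrary, and the mirrored-cone convention used here is one common formulation of the model:

```python
import numpy as np

# Kajiya-Kay shading for a fiber with tangent t, light direction l, and eye
# direction e (all normalized): diffuse ~ sin(t,l), and specular
# ~ (sin(t,l)sin(t,e) - (t.l)(t.e))^p, which peaks when the eye lies on the
# mirrored cone t.e = -(t.l).
def kajiya_kay(t, l, e, p=16.0):
    t, l, e = (v / np.linalg.norm(v) for v in (t, l, e))
    tl, te = np.dot(t, l), np.dot(t, e)
    sin_tl = np.sqrt(max(0.0, 1.0 - tl * tl))
    sin_te = np.sqrt(max(0.0, 1.0 - te * te))
    diffuse = sin_tl
    specular = max(0.0, sin_tl * sin_te - tl * te) ** p
    return diffuse, specular

# Light and eye mirrored about the fiber give the peak specular response.
d, s = kajiya_kay(np.array([0.0, 0.0, 1.0]),
                  np.array([1.0, 0.0, 1.0]),
                  np.array([1.0, 0.0, -1.0]))
print(round(d, 3), round(s, 3))
```

Note what this baseline cannot produce, and what the paper's measurements reveal: it has a single specular lobe and no dependence on rotation about the fiber axis, whereas real hair shows multiple highlights (including a colored, internally scattered one) and azimuthal variation.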
Citations: 318