
Proceedings of the 29th annual conference on Computer graphics and interactive techniques: Latest Publications

The SAGE graphics architecture
M. Deering, David Naegle
The Scalable, Advanced Graphics Environment (SAGE) is a new high-end, multi-chip rendering architecture. Each single SAGE board can render in excess of 80 million fully lit, textured, anti-aliased triangles per second. SAGE brings high quality antialiasing filters to video rate hardware for the first time. To achieve this, the concept of a frame buffer is replaced by a fully double-buffered sample buffer of between 1 and 16 non-uniformly placed samples per final output pixel. The video output raster of samples is subject to convolution by a 5x5 programmable reconstruction and bandpass filter that replaces the traditional RAMDAC. The reconstruction filter processes up to 400 samples per output pixel, and supports any radially symmetric filter, including those with negative lobes (full Mitchell-Netravali filter). Each SAGE board comprises four parallel rendering sub-units, and supports up to two video output channels. Multiple SAGE systems can be tiled together to support even higher fill rates, resolutions, and performance.
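A minimal CPU sketch, under my own function names and a toy sample layout, of the kind of convolution the abstract describes: each output pixel is a normalized, radially symmetric weighted sum of nearby non-uniformly placed samples, here using the Mitchell-Netravali cubic (B = C = 1/3). SAGE performs this in dedicated video-rate hardware over up to 400 samples per pixel; the sketch only illustrates the filter math.

```python
import numpy as np

def mitchell_netravali(x, B=1/3.0, C=1/3.0):
    """Mitchell-Netravali cubic filter evaluated at radial distance x (in pixels)."""
    x = np.abs(x)
    near = ((12 - 9*B - 6*C) * x**3 + (-18 + 12*B + 6*C) * x**2 + (6 - 2*B)) / 6.0
    far = ((-B - 6*C) * x**3 + (6*B + 30*C) * x**2 + (-12*B - 48*C) * x + (8*B + 24*C)) / 6.0
    return np.where(x < 1, near, np.where(x < 2, far, 0.0))

def reconstruct_pixel(pixel_center, sample_positions, sample_colors):
    """Weight each nearby sample by a radially symmetric filter and normalize."""
    d = np.linalg.norm(sample_positions - pixel_center, axis=1)  # radial distance to pixel center
    w = mitchell_netravali(d)
    return (w[:, None] * sample_colors).sum(axis=0) / max(w.sum(), 1e-8)

# toy usage: 16 jittered samples around a pixel centered at the origin
rng = np.random.default_rng(0)
pos = rng.uniform(-0.5, 0.5, size=(16, 2))
col = rng.uniform(0.0, 1.0, size=(16, 3))
print(reconstruct_pixel(np.zeros(2), pos, col))
```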
Citations: 26
Object-based image editing
W. Barrett, Alan S. Cheney
We introduce Object-Based Image Editing (OBIE) for real-time animation and manipulation of static digital photographs. Individual image objects (such as an arm or nose, Figure 1) are selected, scaled, stretched, bent, warped or even deleted (with automatic hole filling) - at the object, rather than the pixel, level - using simple gesture motions with a mouse. OBIE gives the user direct, local control over object shape, size, and placement while dramatically reducing the time required to perform image editing tasks. Object selection is performed by manually collecting (subobject) regions detected by a watershed algorithm. Objects are tessellated into a triangular mesh, allowing shape modification to be performed in real time using OpenGL's texture mapping hardware. Through the use of anchor points, the user is able to interactively perform editing operations on a whole object, or just part(s) of an object - including moving, scaling, rotating, stretching, bending, and deleting. Indirect manipulation of object shape is also provided through the use of sliders and Bezier curves. Holes created by movement are filled in real time based on surrounding texture. When objects stretch or scale, we provide a method for preserving texture granularity or scale. We also present a texture brush, which allows the user to "paint" texture into different parts of an image, using existing image texture(s). OBIE allows the user to perform interactive, high-level editing of image objects in a few seconds to a few tens of seconds.
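A minimal sketch of the selection step described above, assuming scikit-image as a stand-in for the watershed segmentation: the image is over-segmented from a grid of seed markers, and an "object" is simply the union of sub-object regions the user collects. Function names and the marker_spacing / clicked_region_ids parameters are hypothetical.

```python
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

def oversegment(gray_image, marker_spacing=16):
    """Over-segment an image into small watershed regions seeded on a regular grid."""
    gradient = sobel(gray_image)                    # edge strength drives the watershed
    markers = np.zeros_like(gray_image, dtype=int)
    ys = np.arange(marker_spacing // 2, gray_image.shape[0], marker_spacing)
    xs = np.arange(marker_spacing // 2, gray_image.shape[1], marker_spacing)
    markers[np.ix_(ys, xs)] = np.arange(1, len(ys) * len(xs) + 1).reshape(len(ys), len(xs))
    return watershed(gradient, markers)             # label image of sub-object regions

def select_object(labels, clicked_region_ids):
    """An 'object' is the union of the sub-object regions the user has collected."""
    return np.isin(labels, list(clicked_region_ids))
```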
Citations: 122
Articulated body deformation from range scan data
Brett Allen, B. Curless, Zoran Popovic
This paper presents an example-based method for calculating skeleton-driven body deformations. Our example data consists of range scans of a human body in a variety of poses. Using markers captured during range scanning, we construct a kinematic skeleton and identify the pose of each scan. We then construct a mutually consistent parameterization of all the scans using a posable subdivision surface template. The detail deformations are represented as displacements from this surface, and holes are filled smoothly within the displacement maps. Finally, we combine the range scans using k-nearest neighbor interpolation in pose space. We demonstrate results for a human upper body with controllable pose, kinematics, and underlying surface shape.
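The final blending step can be sketched directly: given a query pose, find the k nearest example poses and blend their displacement maps. The inverse-distance weighting, array layout, and function name below are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np

def knn_pose_blend(query_pose, example_poses, example_displacements, k=3):
    """Blend per-vertex displacement maps of the k nearest example poses.

    query_pose:            (d,) joint-angle vector
    example_poses:         (n, d) joint-angle vectors of the scans
    example_displacements: (n, v, 3) per-vertex displacements from the template surface
    """
    dist = np.linalg.norm(example_poses - query_pose, axis=1)
    nearest = np.argsort(dist)[:k]
    w = 1.0 / (dist[nearest] + 1e-8)      # inverse-distance weights (one simple choice)
    w /= w.sum()
    return np.tensordot(w, example_displacements[nearest], axes=1)  # (v, 3) blended displacements
```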
Citations: 386
Stylization and abstraction of photographs
D. DeCarlo, A. Santella
Good information design depends on clarifying the meaningful structure in an image. We describe a computational approach to stylizing and abstracting photographs that explicitly responds to this design goal. Our system transforms images into a line-drawing style using bold edges and large regions of constant color. To do this, it represents images as a hierarchical structure of parts and boundaries computed using state-of-the-art computer vision. Our system identifies the meaningful elements of this structure using a model of human perception and a record of a user's eye movements in looking at the photo; the system renders a new image using transformations that preserve and highlight these visual elements. Our method thus represents a new alternative for non-photorealistic rendering both in its visual style, in its approach to visual form, and in its techniques for interaction.
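A deliberately crude stand-in for the output style the abstract describes (large regions of constant color with bold edges). It does not reproduce the paper's vision-based part/boundary hierarchy or the eye-movement-driven perceptual model; it assumes scikit-image, and the quantization level and edge threshold are arbitrary.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.filters import sobel

def crude_abstraction(rgb, color_levels=4, edge_threshold=0.12):
    """Posterize colors into large flat regions and overlay bold dark edges.

    rgb: float image in [0, 1] with shape (H, W, 3).
    """
    flat = np.round(rgb * (color_levels - 1)) / (color_levels - 1)  # constant-color regions
    edges = sobel(rgb2gray(rgb)) > edge_threshold                   # bold edges
    flat[edges] = 0.0                                               # draw edges in black
    return flat
```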
Citations: 577
Robust treatment of collisions, contact and friction for cloth animation
R. Bridson, Ronald Fedkiw, John Anderson
We present an algorithm to efficiently and robustly process collisions, contact and friction in cloth simulation. It works with any technique for simulating the internal dynamics of the cloth, and allows true modeling of cloth thickness. We also show how our simulation data can be post-processed with a collision-aware subdivision scheme to produce smooth and interference free data for rendering.
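A toy illustration, under my own naming, of the repulsion-impulse idea behind such cloth collision handling: when two pieces of cloth come within the cloth thickness and are still approaching, an inelastic impulse cancels the approaching normal velocity. The paper applies this to point-triangle and edge-edge proximities with friction and further robustness machinery; this point-point version only shows the impulse arithmetic.

```python
import numpy as np

def repulsion_impulse(x_a, x_b, v_a, v_b, m_a, m_b, thickness):
    """Return updated velocities after an inelastic impulse between two close points."""
    d = x_b - x_a
    dist = np.linalg.norm(d)
    if dist >= thickness or dist == 0.0:
        return v_a, v_b                  # not within the cloth thickness
    n = d / dist
    v_rel_n = np.dot(v_b - v_a, n)       # normal component of relative velocity
    if v_rel_n >= 0.0:
        return v_a, v_b                  # already separating
    # impulse magnitude that exactly cancels the approaching normal velocity
    j = -v_rel_n / (1.0 / m_a + 1.0 / m_b)
    return v_a - (j / m_a) * n, v_b + (j / m_b) * n
```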
Citations: 42
A user interface for interactive cinematic shadow design
F. Pellacini, P. Tole, D. Greenberg
Placing shadows is a difficult task since shadows depend on the relative positions of lights and objects in an unintuitive manner. To simplify the task of the modeler, we present a user interface for designing shadows in 3d environments. In our interface, shadows are treated as first-class modeling primitives just like objects and lights. To transform a shadow, the user can simply move, rescale or rotate the shadow as if it were a 2d object on the scene's surfaces. When the user transforms a shadow, the system moves lights or objects in the scene as required and updates the shadows in real time during mouse movement. To facilitate interaction, the user can also specify constraints that the shadows must obey, such as never casting a shadow on the face of a character. These constraints are then verified in real time, limiting mouse movement when necessary. We also integrate fake shadows, typically used in computer animation, into our interface. This allows the user to draw shadowed and non-shadowed regions directly on surfaces in the scene.
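The kind of inverse solve such an interface performs can be sketched for a point light and a ground plane: projecting an occluder point gives its shadow, and conversely, dragging the shadow to a new target determines where the light must move. The fixed-light-height constraint and all names below are assumptions for illustration; the paper handles general lights, objects, and user-specified constraints.

```python
import numpy as np

def shadow_of_point(light, p, ground_y=0.0):
    """Project occluder point p onto the plane y = ground_y from a point light."""
    t = (ground_y - light[1]) / (p[1] - light[1])
    return light + t * (p - light)

def light_for_shadow(p, shadow_target, light_height, ground_y=0.0):
    """Inverse problem: keep the light at a fixed height and move it so that the
    shadow of occluder point p lands at shadow_target on the ground plane.
    The light must lie on the line through shadow_target and p."""
    t = (light_height - p[1]) / (p[1] - ground_y)
    return p + t * (p - shadow_target)

# round trip: dragging the shadow moves the light, and the new light reproduces the shadow
p = np.array([1.0, 2.0, 0.5])
target = np.array([3.0, 0.0, 2.0])
light = light_for_shadow(p, target, light_height=6.0)
print(shadow_of_point(light, p))   # ~ [3, 0, 2]
```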
Citations: 90
Interactive skeleton-driven dynamic deformations
Steve Capell, Seth Green, B. Curless, T. Duchamp, Zoran Popovic
This paper presents a framework for the skeleton-driven animation of elastically deformable characters. A character is embedded in a coarse volumetric control lattice, which provides the structure needed to apply the finite element method. To incorporate skeletal controls, we introduce line constraints along the bones of simple skeletons. The bones are made to coincide with edges of the control lattice, which enables us to apply the constraints efficiently using algebraic methods. To accelerate computation, we associate regions of the volumetric mesh with particular bones and perform locally linearized simulations, which are blended at each time step. We define a hierarchical basis on the control lattice, so for detailed interactions the simulation can adapt the level of detail. We demonstrate the ability to animate complex models using simple skeletons and coarse volumetric meshes in a manner that simulates secondary motions at interactive rates.
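The embedding step (a character surface carried along by a coarse volumetric control lattice) can be sketched with trilinear weights, as below. This is only the kinematic part; the paper's contribution is the constrained, locally linearized finite-element simulation of the lattice itself, which is not shown. Function names and the corner ordering are my own.

```python
import numpy as np

def trilinear_weights(local):
    """Trilinear weights of a point with local coordinates (x, y, z) in [0, 1]^3.
    Corner order: x varies fastest, then y, then z."""
    x, y, z = local
    return np.array([(1-x)*(1-y)*(1-z), x*(1-y)*(1-z), (1-x)*y*(1-z), x*y*(1-z),
                     (1-x)*(1-y)*z,     x*(1-y)*z,     (1-x)*y*z,     x*y*z])

def deform_embedded_vertex(local, cell_corners_deformed):
    """Move an embedded surface vertex with the 8 deformed corners of its lattice cell.

    cell_corners_deformed: (8, 3) corner positions in the same order as the weights.
    """
    return trilinear_weights(local) @ cell_corners_deformed   # (8,) @ (8, 3) -> (3,)
```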
Citations: 303
Chromium: a stream-processing framework for interactive rendering on clusters
G. Humphreys, M. Houston, Ren Ng, R. Frank, Sean Ahern, P. Kirchner, James T. Klosowski
We describe Chromium, a system for manipulating streams of graphics API commands on clusters of workstations. Chromium's stream filters can be arranged to create sort-first and sort-last parallel graphics architectures that, in many cases, support the same applications while using only commodity graphics accelerators. In addition, these stream filters can be extended programmatically, allowing the user to customize the stream transformations performed by nodes in a cluster. Because our stream processing mechanism is completely general, any cluster-parallel rendering algorithm can be either implemented on top of or embedded in Chromium. In this paper, we give examples of real-world applications that use Chromium to achieve good scalability on clusters of workstations, and describe other potential uses of this stream processing technology. By completely abstracting the underlying graphics architecture, network topology, and API command processing semantics, we allow a variety of applications to run in different environments.
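The stream-filter idea can be illustrated with a toy Python analogue: filters consume a stream of API commands, transform or merely observe them, and pass them downstream, so they compose by simple chaining. Chromium itself implements such filters as C stream processing units operating on real OpenGL command streams; the command tuples and function names below are purely illustrative.

```python
# A toy stream of graphics-API commands, represented as (name, args) tuples.
stream = [
    ("glClearColor", (0.0, 0.0, 0.0, 1.0)),
    ("glBegin", ("GL_TRIANGLES",)),
    ("glVertex3f", (0.0, 0.0, 0.0)),
    ("glVertex3f", (1.0, 0.0, 0.0)),
    ("glVertex3f", (0.0, 1.0, 0.0)),
    ("glEnd", ()),
]

def tint_clear_color(commands, rgba):
    """A filter that rewrites every glClearColor call and passes everything else through."""
    for name, args in commands:
        yield (name, rgba) if name == "glClearColor" else (name, args)

def count_vertices(commands, counter):
    """A filter that observes the stream (counting vertices) without altering it."""
    for name, args in commands:
        if name.startswith("glVertex"):
            counter["vertices"] += 1
        yield name, args

stats = {"vertices": 0}
filtered = count_vertices(tint_clear_color(iter(stream), (0.2, 0.2, 0.2, 1.0)), stats)
for cmd in filtered:
    print(cmd)
print(stats)
```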
Citations: 736
Self-similarity based texture editing
Stephen Brooks, N. Dodgson
We present a simple method of interactive texture editing that utilizes self-similarity to replicate intended operations globally over an image. Inspired by the recent successes of hierarchical approaches to texture synthesis, this method also uses multi-scale neighborhoods to assess the similarity of pixels within a texture. However, neighborhood matching is not employed to generate new instances of a texture. We instead locate similar neighborhoods for the purpose of replicating editing operations on the original texture itself, thereby creating a fundamentally new texture. This general approach is applied to texture painting, cloning and warping. These global operations are performed interactively, most often directed with just a single mouse movement.
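A rough sketch of the replication idea, assuming NumPy/SciPy: summarize each pixel's multi-scale neighborhood (here with a stack of Gaussian blurs rather than the paper's explicit neighborhood vectors), then repeat a paint edit at every pixel whose summary is close to that of the edited pixel. The descriptor, threshold, and function names are hypothetical simplifications.

```python
import numpy as np
from scipy import ndimage as ndi

def multiscale_descriptor(gray, sigmas=(1.0, 2.0, 4.0)):
    """Stack Gaussian-blurred copies so each pixel carries a coarse-to-fine neighborhood summary."""
    return np.stack([ndi.gaussian_filter(gray, s) for s in sigmas], axis=-1)

def similar_pixels(gray, y, x, threshold=0.05):
    """Boolean mask of pixels whose multi-scale summary resembles that of the edited pixel (y, x)."""
    desc = multiscale_descriptor(gray)
    dist = np.linalg.norm(desc - desc[y, x], axis=-1)
    return dist < threshold

def replicate_paint(image, y, x, color, threshold=0.05):
    """Replicate a paint edit made at (y, x) at all self-similar locations in the texture."""
    gray = image.mean(axis=-1)
    out = image.copy()
    out[similar_pixels(gray, y, x, threshold)] = color
    return out
```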
Citations: 69
Cut-and-paste editing of multiresolution surfaces
H. Biermann, Ioana M. Boier-Martin, F. Bernardini, D. Zorin
Cutting and pasting to combine different elements into a common structure are widely used operations that have been successfully adapted to many media types. Surface design could also benefit from the availability of a general, robust, and efficient cut-and-paste tool, especially during the initial stages of design when a large space of alternatives needs to be explored. Techniques to support cut-and-paste operations for surfaces have been proposed in the past, but have been of limited usefulness due to constraints on the type of shapes supported and the lack of real-time interaction. In this paper, we describe a set of algorithms based on multiresolution subdivision surfaces that perform at interactive rates and enable intuitive cut-and-paste operations.
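The core cut-and-paste operation can be illustrated in one dimension, assuming SciPy: split source and target into a smooth base plus the detail riding on it, then paste the source detail onto the target base over a region. The paper does this on multiresolution subdivision surfaces with a proper surface parameterization and boundary handling; this 1-D base/detail decomposition only conveys the idea, and the function names are mine.

```python
import numpy as np
from scipy import ndimage as ndi

def split_base_detail(signal, sigma=4.0):
    """Split a 1-D 'surface' into a smooth base and the detail on top of it."""
    base = ndi.gaussian_filter1d(signal, sigma)
    return base, signal - base

def paste_detail(target, source, region, sigma=4.0):
    """Cut the detail of `source` over `region` and paste it onto the base of `target`."""
    t_base, _ = split_base_detail(target, sigma)
    _, s_detail = split_base_detail(source, sigma)
    out = target.copy()
    out[region] = t_base[region] + s_detail[region]
    return out

# toy usage: transplant high-frequency bumps from one curve onto another
x = np.linspace(0, 2 * np.pi, 256)
target = np.sin(x)
source = 0.5 * np.cos(x) + 0.05 * np.sin(40 * x)
pasted = paste_detail(target, source, region=slice(64, 192))
```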
Citations: 200