
IEEE Visualization, 2003 (VIS 2003): Latest Publications

Visualization of volume data with quadratic super splines
Pub Date : 2003-10-22 DOI: 10.1109/VIS.2003.10040
Christian Rössl, Frank Zeilfelder, G. Nürnberger, H. Seidel
We develop a new approach to reconstruct non-discrete models from gridded volume samples. As a model, we use quadratic trivariate super splines on a uniform tetrahedral partition Δ. The approximating splines are determined in a natural and completely symmetric way by averaging local data samples, such that appropriate smoothness conditions are automatically satisfied. On each tetrahedron of Δ, the quasi-interpolating spline is a polynomial of total degree two, which allows efficient computation, evaluation, and visualization of the model. We apply Bernstein-Bézier techniques well known in CAGD to compute and evaluate the trivariate spline and its gradient. With this approach the volume data can be visualized efficiently, e.g., with isosurface ray-casting. Along an arbitrary ray the splines are univariate, piecewise quadratics, and thus the exact intersection for a prescribed isovalue can be determined analytically and exactly. Our results confirm the efficiency of the quasi-interpolating method and demonstrate high visual quality for rendered isosurfaces.
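The analytic intersection step is the practical payoff: restricted to a ray, the spline is a piecewise quadratic in the ray parameter, so each piece's isovalue crossing reduces to solving one quadratic equation. A minimal sketch of that per-piece root finding (function name and tolerance are illustrative, not from the paper):

```python
import math

def quadratic_iso_hits(a, b, c, iso, t0, t1):
    """Roots of a*t^2 + b*t + c = iso within the piece [t0, t1], ascending."""
    A, B, C = a, b, c - iso
    if abs(A) < 1e-12:          # degenerate piece: effectively linear
        if abs(B) < 1e-12:
            return []
        roots = [-C / B]
    else:
        disc = B * B - 4 * A * C
        if disc < 0:
            return []           # no real crossing on this piece
        s = math.sqrt(disc)
        roots = sorted([(-B - s) / (2 * A), (-B + s) / (2 * A)])
    return [t for t in roots if t0 <= t <= t1]

# Example: s(t) = t^2 - 1 crosses iso = 0 at t = 1 inside [0, 2]
hits = quadratic_iso_hits(1.0, 0.0, -1.0, 0.0, 0.0, 2.0)
```

The first returned root is the exact isosurface hit for front-to-back ray-casting; no iterative refinement is needed.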
Citations: 21
Volume tracking using higher dimensional isosurfacing
Pub Date : 2003-10-22 DOI: 10.1109/VISUAL.2003.1250374
Guangfeng Ji, Han-Wei Shen, R. Wenger
Tracking and visualizing local features in time-varying volumetric data allows the user to focus on selected regions of interest, both in space and time, which can lead to a better understanding of the underlying dynamics. In this paper, we present an efficient algorithm to track time-varying isosurfaces and interval volumes using isosurfacing in higher dimensions. Instead of extracting data features such as isosurfaces or interval volumes separately from multiple time steps and computing the spatial correspondence between those features, our algorithm extracts the correspondence directly from the higher dimensional geometry and thus can more efficiently follow user-selected local features over time. In addition, by analyzing the resulting higher dimensional geometry, it becomes easier to detect important topological events and the corresponding critical time steps for the selected features. With our algorithm, the user can interact with the underlying time-varying data more easily. The computational cost of time-varying volume tracking is also minimized.
Citations: 75
Acceleration techniques for GPU-based volume rendering
Pub Date : 2003-10-22 DOI: 10.1109/VIS.2003.10001
J. Krüger, R. Westermann
Nowadays, direct volume rendering via 3D textures has established itself as an efficient tool for the display and visual analysis of volumetric scalar fields. It is commonly accepted that, for reasonably sized data sets, appropriate quality at interactive rates can be achieved by means of this technique. However, despite these benefits, one important issue has received little attention throughout the ongoing discussion of texture-based volume rendering: the integration of acceleration techniques to reduce per-fragment operations. In this paper, we address the integration of early ray termination and empty-space skipping into texture-based volume rendering on graphics processing units (GPUs). To this end, we describe volume ray-casting on programmable graphics hardware as an alternative to object-order approaches. We exploit the early z-test to terminate fragment processing once sufficient opacity has been accumulated, and to skip empty space along the rays of sight. We demonstrate performance gains up to a factor of 3 for typical renditions of volumetric data sets on the ATI 9700 graphics card.
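Early ray termination rests on front-to-back compositing: once accumulated opacity approaches 1, further samples cannot visibly change the pixel, so the ray can stop. A 1-D sketch of that loop (names and the 0.99 threshold are illustrative; the paper implements the termination test via the early z-test on the GPU):

```python
def composite_front_to_back(samples, opacity_threshold=0.99):
    """samples: (color, alpha) pairs along one ray, ordered front to back."""
    color, alpha = 0.0, 0.0
    steps = 0
    for c, a in samples:
        color += (1.0 - alpha) * a * c   # standard under-operator compositing
        alpha += (1.0 - alpha) * a
        steps += 1
        if alpha >= opacity_threshold:   # early ray termination
            break
    return color, alpha, steps

ray = [(1.0, 0.6), (0.5, 0.8), (0.2, 0.9), (0.7, 0.5)]
color, alpha, steps = composite_front_to_back(ray)
```

Here the fourth sample is never processed: after three samples the ray is already effectively opaque.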
Citations: 931
Planet-sized batched dynamic adaptive meshes (P-BDAM)
Pub Date : 2003-10-22 DOI: 10.1109/VISUAL.2003.1250366
Paolo Cignoni, F. Ganovelli, E. Gobbetti, F. Marton, F. Ponchio, Roberto Scopigno
We describe an efficient technique for out-of-core management and interactive rendering of planet-sized textured terrain surfaces. The technique, called planet-sized batched dynamic adaptive meshes (P-BDAM), extends the BDAM approach by using as its basic primitive a general triangulation of points on a displaced triangle. The proposed framework introduces several advances with respect to the state of the art: thanks to a batched host-to-graphics communication model, we outperform current adaptive tessellation solutions in terms of rendering speed; we guarantee overall geometric continuity, exploiting programmable graphics hardware to cope with the accuracy issues introduced by single-precision floating point; we exploit a compressed out-of-core representation and speculative prefetching to hide disk latency during rendering of out-of-core data; and we efficiently construct high-quality simplified representations with a novel distributed out-of-core simplification algorithm working on a standard PC network.
Citations: 165
Visibility culling using plenoptic opacity functions for large volume visualization
Pub Date : 2003-10-22 DOI: 10.1109/VISUAL.2003.1250391
Jinzhu Gao, Jian Huang, Han-Wei Shen, J. Kohl
Visibility culling has the potential to accelerate large data visualization in significant ways. Unfortunately, existing algorithms do not scale well when parallelized, and require full re-computation whenever the opacity transfer function is modified. To address these issues, we have designed a Plenoptic Opacity Function (POF) scheme to encode the view-dependent opacity of a volume block. POFs are computed off-line during a pre-processing stage, only once for each block. We show that using POFs is (i) an efficient, conservative and effective way to encode the opacity variations of a volume block for a range of views, (ii) flexible for re-use by a family of opacity transfer functions without the need for additional off-line processing, and (iii) highly scalable for use in massively parallel implementations. Our results confirm the efficacy of POFs for visibility culling in large-scale parallel volume rendering; we can interactively render the Visible Woman dataset using software ray-casting on 32 processors, with interactive modification of the opacity transfer function on-the-fly.
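The culling idea can be illustrated in one dimension: given a conservative lower bound on each block's opacity for the current view, any block behind an accumulated opacity near 1 can be skipped entirely. This is a simplified sketch of conservative per-block visibility culling, not the paper's actual POF encoding:

```python
def cull_occluded_blocks(block_min_alpha, threshold=0.99):
    """block_min_alpha: conservative per-block opacity lower bounds, listed
    front to back along the view direction. Returns indices of visible blocks."""
    visible = []
    acc = 0.0
    for i, a_min in enumerate(block_min_alpha):
        if acc >= threshold:
            break                        # all remaining blocks are occluded
        visible.append(i)
        acc += (1.0 - acc) * a_min       # conservative opacity accumulation
    return visible

vis = cull_occluded_blocks([0.5, 0.9, 0.8, 0.3])
```

Because the per-block bounds are lower bounds, skipping is conservative: a block is only culled when it is provably hidden for any transfer function in the supported family.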
Citations: 44
Piecewise C^1 continuous surface reconstruction of noisy point clouds via local implicit quadric regression
Pub Date : 2003-10-22 DOI: 10.1109/VISUAL.2003.1250359
Hui Xie, Jianning Wang, Jing Hua, Hong Qin, A. Kaufman
This paper addresses the problem of surface reconstruction from highly noisy point clouds. The surfaces to be reconstructed are assumed to be 2-manifolds of piecewise C^1 continuity, with isolated small irregular regions of high curvature, complicated local topology, or abrupt bursts of noise. At each sample point, a quadric field is locally fitted via a modified moving least squares method. These locally fitted quadric fields are then blended together to produce a pseudo-signed distance field using Shepard's method. We introduce a prioritized front-growing scheme in the process of local quadric fitting. Flatter surface areas tend to grow faster. The already fitted regions subsequently guide the fitting of the irregular regions in their neighborhood.
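Shepard's method combines the local fits by inverse-distance weighting, so each locally fitted value dominates near its own sample point. A 1-D sketch of the weighting (the paper blends trivariate quadric fields; the names and exponent here are illustrative):

```python
def shepard_blend(x, centers, local_values, p=2, eps=1e-9):
    """Inverse-distance (Shepard) blend of per-center local fits at point x."""
    num, den = 0.0, 0.0
    for c, f in zip(centers, local_values):
        d = abs(x - c)
        if d < eps:              # exactly at a center: use its local fit
            return f
        w = 1.0 / d ** p         # weight grows without bound near the center
        num += w * f
        den += w
    return num / den

# Two local fits valued 0 and 1 at centers 0 and 1: the midpoint blends to 0.5
v = shepard_blend(0.5, [0.0, 1.0], [0.0, 1.0])
```

The blend interpolates each local fit at its own center and varies smoothly in between, which is what makes the combined field usable as a pseudo-signed distance field.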
Citations: 67
A framework for sample-based rendering with O-buffers
Pub Date : 2003-10-22 DOI: 10.1109/VISUAL.2003.1250405
Huamin Qu, A. Kaufman, R. Shao, Ankush Kumar
We present an innovative modeling and rendering primitive, called the O-buffer, for sample-based graphics, such as images, volumes and points. The 2D or 3D O-buffer is in essence a conventional image or a volume, respectively, except that samples are not restricted to a regular grid. A sample position in the O-buffer is recorded as an offset to the nearest grid point of a regular base grid (hence the name O-buffer). The offset is typically quantized for compact representation and efficient rendering. The O-buffer emancipates pixels and voxels from the regular grids and can greatly improve the modeling power of images and volumes. It is a semi-regular structure which lends itself to efficient construction and rendering. Image quality can be improved by storing more spatial information with samples and by avoiding multiple resamplings and delaying reconstruction to the final rendering stage. Using O-buffers, more accurate multi-resolution representations can be developed for images and volumes. It can also be exploited to represent and render unstructured primitives, such as points, particles, curvilinear or irregular volumes. The O-buffer is therefore a uniform representation for a variety of graphics primitives and supports mixing them in the same scene. We demonstrate the effectiveness of the O-buffer with hierarchical O-buffers, layered depth O-buffers, and hybrid volume rendering with O-buffers.
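The core encoding stores each sample as a quantized offset from its nearest grid point of the regular base grid. A 1-D sketch of such an encode/decode pair (the bin count and layout are illustrative, not the paper's exact quantization):

```python
def encode_offset(pos, levels=16):
    """Quantize a 1-D sample position into (nearest grid index, offset code).
    The offset in [-0.5, 0.5) around the grid point is split into `levels` bins."""
    idx = round(pos)
    offset = pos - idx
    code = min(int((offset + 0.5) * levels), levels - 1)
    return idx, code

def decode_offset(idx, code, levels=16):
    """Reconstruct the position at the center of the stored offset bin."""
    return idx + (code + 0.5) / levels - 0.5

idx, code = encode_offset(3.30)
approx = decode_offset(idx, code)   # within half a bin of the true position
```

With 16 levels the offset costs 4 bits per axis, and the reconstruction error is bounded by half a bin width (1/32 of the grid spacing), which is the compactness/accuracy trade-off the quantized O-buffer exploits.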
Citations: 7
Using deformations for browsing volumetric data
Pub Date : 2003-10-22 DOI: 10.1109/VISUAL.2003.1250400
Michael J. McGuffin, Liviu Tancau, Ravin Balakrishnan
Many traditional techniques for "looking inside" volumetric data involve removing portions of the data, for example using various cutting tools, to reveal the interior. This allows the user to see hidden parts of the data, but has the disadvantage of removing potentially important surrounding contextual information. We explore an alternate strategy for browsing that uses deformations, where the user can cut into and open up, spread apart, or peel away parts of the volume in real time, making the interior visible while still retaining surrounding context. We consider various deformation strategies and present a number of interaction techniques based on different metaphors. Our designs pay special attention to the semantic layers that might compose a volume (e.g. the skin, muscle, bone in a scan of a human). Users can apply deformations to only selected layers, or apply a given deformation to a different degree to each layer, making browsing more flexible and facilitating the visualization of relationships between layers. Our interaction techniques are controlled with direct, "in place" manipulation, using pop-up menus and 3D widgets, to avoid the divided attention and awkwardness that would come with panels of traditional widgets. Initial user feedback indicates that our techniques are valuable, especially for showing portions of the data spatially situated in context with surrounding data.
Citations: 218
Empty space skipping and occlusion clipping for texture-based volume rendering
Pub Date : 2003-10-22 DOI: 10.1109/VISUAL.2003.1250388
Wei Li, K. Mueller, A. Kaufman
We propose methods to accelerate texture-based volume rendering by skipping invisible voxels. We partition the volume into sub-volumes, each containing voxels with similar properties. Sub-volumes composed only of voxels mapped to empty by the transfer function are skipped. To render the adaptively partitioned sub-volumes in visibility order, we reorganize them into an orthogonal BSP tree. We also present an algorithm that incrementally computes the intersection of the volume with the slicing planes, which avoids the overhead of the intersection and texture-coordinate computation introduced by the partitioning. Rendering with empty-space skipping is 2 to 5 times faster than without it. To skip occluded voxels, we introduce the concept of an orthogonal opacity map, which simplifies the transformation between volume coordinates and opacity-map coordinates used intensively for occlusion detection. The map is updated efficiently by the GPU. The sub-volumes are then culled and clipped against the opacity map. We also present a method that adaptively adjusts the optimal number of opacity-map updates. With occlusion clipping, about 60% of non-empty voxels can be skipped, and an additional 80% speedup on average is gained for isosurface-like rendering.
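The empty-space skipping part only needs a per-block test against the transfer function: a sub-volume is skipped if none of its voxels maps to nonzero opacity. A small sketch using uniform blocks instead of the paper's adaptive BSP partition (all names are illustrative):

```python
import numpy as np

def nonempty_blocks(volume, transfer_alpha, block=4):
    """Partition a 3-D array into block^3 sub-volumes and keep only those
    containing at least one voxel with nonzero opacity under the transfer
    function, i.e. the sub-volumes that must actually be rendered."""
    keep = []
    nz, ny, nx = volume.shape
    for z in range(0, nz, block):
        for y in range(0, ny, block):
            for x in range(0, nx, block):
                sub = volume[z:z + block, y:y + block, x:x + block]
                if np.any(transfer_alpha(sub) > 0.0):
                    keep.append((z, y, x))
    return keep

vol = np.zeros((8, 8, 8))
vol[5, 5, 5] = 200.0
alpha = lambda v: np.where(v > 100.0, 1.0, 0.0)  # simple step transfer function
blocks = nonempty_blocks(vol, alpha)
```

Note the classification depends on the transfer function, which is why the paper groups voxels with similar properties: a block's empty/non-empty status can then be re-evaluated cheaply when the transfer function changes.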
Citations: 146
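The core of the empty-space-skipping idea above is that a sub-volume can be skipped when every one of its voxels maps to zero opacity under the transfer function. A minimal NumPy sketch of that classification step (not the paper's implementation; the brick size, transfer-function array, and all names are hypothetical):

```python
import numpy as np

def mark_empty_bricks(volume, opacity_tf, brick=8):
    """Partition the volume into brick^3 sub-volumes and flag those whose
    voxels all map to zero opacity under the transfer function."""
    dz, dy, dx = volume.shape
    nz, ny, nx = dz // brick, dy // brick, dx // brick
    empty = np.zeros((nz, ny, nx), dtype=bool)
    for k in range(nz):
        for j in range(ny):
            for i in range(nx):
                sub = volume[k * brick:(k + 1) * brick,
                             j * brick:(j + 1) * brick,
                             i * brick:(i + 1) * brick]
                # a brick is skippable iff every voxel's opacity is zero
                empty[k, j, i] = np.all(opacity_tf[sub] == 0.0)
    return empty

# Toy 16^3 volume of 8-bit scalars; transfer function: zero opacity below 128.
vol = np.zeros((16, 16, 16), dtype=np.uint8)
vol[8:, :, :] = 200  # top half is "solid"
tf = np.where(np.arange(256) < 128, 0.0, 1.0)
flags = mark_empty_bricks(vol, tf)
# bottom-half bricks are flagged empty and would be skipped during slicing
```

In the paper, empty bricks are merged adaptively and the remainder organized into an orthogonal BSP tree for back-to-front traversal; the sketch only shows the per-brick emptiness test that feeds such a structure.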
Hierarchical splatting of scattered data
Pub Date : 2003-10-22 DOI: 10.1109/VISUAL.2003.1250404
M. Hopf, T. Ertl
Numerical particle simulations and astronomical observations create huge data sets containing uncorrelated 3D points of varying size. These data sets cannot be visualized interactively by simply rendering millions of colored points for each frame. Therefore, in many visualization applications a scalar density corresponding to the point distribution is resampled on a regular grid for direct volume rendering. However, many fine details are usually lost for voxel resolutions which still allow interactive visualization on standard workstations. Since no surface geometry is associated with our data sets, the recently introduced point-based rendering algorithms cannot be applied as well. In this paper we propose to accelerate the visualization of scattered point data by a hierarchical data structure based on a PCA clustering procedure. By traversing this structure for each frame we can trade-off rendering speed vs. image quality. Our scheme also reduces memory consumption by using quantized relative coordinates and it allows for fast sorting of semi-transparent clusters. We analyze various software and hardware implementations of our renderer and demonstrate that we can now visualize data sets with tens of millions of points interactively with sub-pixel screen space error on current PC graphics hardware employing advanced vertex shader functionality.
{"title":"Herarchical splatting of scattered data","authors":"M. Hopf, T. Ertl","doi":"10.1109/VISUAL.2003.1250404","DOIUrl":"https://doi.org/10.1109/VISUAL.2003.1250404","url":null,"abstract":"Numerical particle simulations and astronomical observations create huge data sets containing uncorrelated 3D points of varying size. These data sets cannot be visualized interactively by simply rendering millions of colored points for each frame. Therefore, in many visualization applications a scalar density corresponding to the point distribution is resampled on a regular grid for direct volume rendering. However, many fine details are usually lost for voxel resolutions which still allow interactive visualization on standard workstations. Since no surface geometry is associated with our data sets, the recently introduced point-based rendering algorithms cannot be applied as well. In this paper we propose to accelerate the visualization of scattered point data by a hierarchical data structure based on a PCA clustering procedure. By traversing this structure for each frame we can trade-off rendering speed vs. image quality. Our scheme also reduces memory consumption by using quantized relative coordinates and it allows for fast sorting of semi-transparent clusters. We analyze various software and hardware implementations of our renderer and demonstrate that we can now visualize data sets with tens of millions of points interactively with sub-pixel screen space error on current PC graphics hardware employing advanced vertex shader functionality.","PeriodicalId":372131,"journal":{"name":"IEEE Visualization, 2003. 
VIS 2003.","volume":"157 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2003-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123273354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 67
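A PCA-based clustering hierarchy of the kind the abstract describes can be built by recursively splitting a point set along its principal axis (the covariance eigenvector with the largest eigenvalue) at the median projection. A minimal NumPy sketch under those assumptions (the recursion threshold and all names are hypothetical, and the paper's quantization and rendering stages are omitted):

```python
import numpy as np

def pca_split(points, max_size=1000):
    """Recursively split a point set along its principal axis until each
    cluster holds at most max_size points; returns the leaf clusters."""
    if len(points) <= max_size:
        return [points]
    centered = points - points.mean(axis=0)
    # principal direction = covariance eigenvector with largest eigenvalue
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    axis = eigvecs[:, -1]  # eigh returns eigenvalues in ascending order
    proj = centered @ axis
    median = np.median(proj)
    left = points[proj <= median]
    right = points[proj > median]
    return pca_split(left, max_size) + pca_split(right, max_size)

rng = np.random.default_rng(0)
pts = rng.normal(size=(4000, 3))
leaves = pca_split(pts, max_size=1000)
# every leaf is small enough to be rendered as one splat cluster
```

Keeping the interior splits around lets a renderer traverse the tree to a chosen depth per frame, which is how the speed-versus-quality trade-off in the abstract arises.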