
IEEE Symposium on Volume Visualization (Cat. No.989EX300): Latest Publications

A real-time volume rendering architecture using an adaptive resampling scheme for parallel and perspective projections
Pub Date : 1998-10-01 DOI: 10.1145/288126.288146
M. Ogata, T. Ohkami, H. Lauer, H. Pfister
The paper describes an object-order real-time volume rendering architecture using an adaptive resampling scheme to perform resampling operations in a unified parallel pipeline manner for both parallel and perspective projections. Unlike parallel projections, perspective projections require a variable resampling structure due to diverging perspective rays. To address this issue, we propose an adaptive pipelined convolution block for resampling operations that uses the level of resolution to keep the parallel pipeline structure regular. We also propose to use multi-resolution datasets prepared for different levels of grid resolution to bound the convolution operations. The proposed convolution block is organized using a systolic array structure, which works well with a distributed skewed memory for conflict-free accesses of voxels. We present the results of some experiments with our software simulators of the proposed architecture and discuss important technical issues.
Citations: 15
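The central idea of the adaptive scheme (pick a coarser grid level as perspective rays diverge, so the convolution footprint stays bounded) can be sketched in a few lines. This is a minimal illustration under an assumed linear ray-spacing model; `resolution_level` is a hypothetical helper, not part of the paper's hardware pipeline:

```python
import math

def resolution_level(depth, near, base_spacing=1.0):
    # Perspective ray spacing grows linearly with distance from the eye,
    # so the sample spacing at `depth` is depth/near times the spacing
    # at the near plane.
    spacing = base_spacing * depth / near
    # Level k uses a grid coarsened by 2**k; choose the smallest level
    # whose voxel size still covers the local sample spacing, which
    # keeps the resampling convolution footprint fixed.
    return max(0, math.ceil(math.log2(spacing)))
```

Rays at the near plane then use the full-resolution grid (level 0), while at four times the near distance the level-2 grid bounds the footprint.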
Coloring voxel-based objects for virtual endoscopy
Pub Date : 1998-10-01 DOI: 10.1145/288126.288140
Omer Shibolet, D. Cohen-Or
The paper describes a method for coloring voxel-based models. The method generalizes the two-part texture mapping technique to color non-convex objects in a more natural way. The method was developed for coloring internal cavities in the application of virtual endoscopy, where the surfaces are shaped like a general cylinder at the macro level, but with folds and bumps at more detailed levels. Given a flat texture, the coloring method defines a mapping between the 3D surface and the texture which reflects the tensions of the points on the surface. The core of the method is a technique for mapping such non-convex surfaces to convex ones. The new technique is based on a discrete dilation process that is fast and robust, and bypasses many of the numerical problems common to previous methods.
Citations: 18
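The discrete dilation process at the heart of the convexification is ordinary binary morphological dilation iterated on the voxel grid. A minimal 2D, 4-connected sketch (the paper works on 3D voxel surfaces):

```python
def dilate(grid):
    # One 4-connected binary dilation step: every object voxel also
    # turns on its face neighbours.
    h, w = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for y in range(h):
        for x in range(w):
            if grid[y][x]:
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w:
                        out[ny][nx] = 1
    return out
```

Iterating this fills concavities before anything else, which is the sense in which repeated dilation drives a folded cavity surface toward a convex one without numerically fragile computations.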
Opacity-weighted color interpolation for volume sampling
Pub Date : 1998-10-01 DOI: 10.1145/288126.288186
C. Wittenbrink, T. Malzbender, Michael E. Goss
Volume rendering creates images from sampled volumetric data. The compute-intensive nature of volume rendering has driven research in algorithm optimization. An important speed optimization is the use of preclassification and preshading. The authors demonstrate an artifact that results when interpolating preclassified or preshaded colors and opacity values separately. This method is flawed, leading to visible artifacts. They present an improved technique, opacity-weighted color interpolation, evaluate the RMS error improvement and the hardware and algorithm efficiency, and demonstrate the improvements. They show analytically that opacity-weighted color interpolation exactly reproduces material-based interpolation results for certain volume classifiers, with the efficiencies of preclassification. The proposed technique may also have broad impact on opacity-texture-mapped polygon rendering.
Citations: 87
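The artifact and its fix are easy to reproduce. Interpolating color and opacity separately lets a fully transparent sample tint the result; interpolating opacity-weighted colors (color times alpha) does not. A single-channel sketch:

```python
def interp_naive(c0, a0, c1, a1, t):
    # Flawed: interpolate preshaded color and opacity independently.
    return ((1 - t) * c0 + t * c1, (1 - t) * a0 + t * a1)

def interp_opacity_weighted(c0, a0, c1, a1, t):
    # Interpolate the opacity-weighted color c*a, then divide the
    # opacity back out, so transparent samples contribute no color.
    a = (1 - t) * a0 + t * a1
    ca = (1 - t) * c0 * a0 + t * c1 * a1
    return (ca / a if a > 0 else 0.0, a)
```

Midway between a fully transparent red sample (c=1, a=0) and an opaque black one (c=0, a=1), the naive scheme yields a visible tint of 0.5 while the opacity-weighted scheme correctly yields 0.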
Adaptive perspective ray casting
Pub Date : 1998-10-01 DOI: 10.1145/288126.288154
K. Kreeger, I. Bitter, F. Dachille, Baoquan Chen, A. Kaufman
We present a method to accurately and efficiently perform perspective volumetric ray casting of uniform regular datasets, called Exponential-Region (ER) Perspective. Unlike previous methods, which undersample, oversample, or approximate the data, our method near-uniformly samples the data throughout the viewing volume. In addition, it gains algorithmic advantages from a regular sampling pattern and cache-coherent read access, making it an algorithm well suited for implementation on hardware architectures for volume rendering. We qualify the algorithm by its filtering characteristics and demonstrate its effectiveness by contrasting its antialiasing quality and timing with other perspective ray casting methods.
Citations: 34
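The region layout behind ER Perspective can be illustrated directly: each region spans one doubling of depth, so within any region the perspective ray spacing varies by at most a factor of two, and halving the ray count per region keeps sampling near uniform. A sketch (`er_regions` is a hypothetical helper, not the paper's interface):

```python
def er_regions(near, far):
    # Region k spans [near * 2**k, near * 2**(k+1)); within any region
    # the diverging-ray spacing varies by at most a factor of two.
    regions = []
    z = near
    while z < far:
        regions.append((z, min(2 * z, far)))
        z *= 2
    return regions
```

The number of regions is therefore logarithmic in the far/near depth ratio, which is what keeps the resampling pattern regular enough for a hardware pipeline.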
Probabilistic segmentation of volume data for visualization using SOM-PNN classifier
Pub Date : 1998-10-01 DOI: 10.1145/288126.288162
Feng Ma, Wenping Wang, W. W. Tsang, Zesheng Tang, Shaowei Xia, Xin Tong
We present a new probabilistic classifier, called the SOM-PNN classifier, for volume data classification and visualization. The new classifier produces probabilistic classifications with a Bayesian confidence measure, which is highly desirable in volume rendering. Based on the SOM map trained with a large training data set, our SOM-PNN classifier performs the probabilistic classification using the PNN algorithm. This combined use of SOM and PNN overcomes the shortcomings of the parametric methods, the nonparametric methods, and the SOM method. The proposed SOM-PNN classifier has been used to segment the CT sloth data and 20 human MRI brain volumes, resulting in much more informative 3D renderings with more detail and fewer artifacts than other methods. Numerical comparisons demonstrate that the SOM-PNN classifier is a fast, accurate and probabilistic classifier for volume rendering.
Citations: 24
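The PNN stage is a Parzen-window density estimate per class, normalized into Bayesian posteriors. A 1D sketch in which plain training samples stand in for the SOM codebook vectors the paper actually uses:

```python
import math

def pnn_posterior(x, training, sigma=1.0):
    # Per-class Parzen-window score: a Gaussian kernel centered on
    # each training sample (standing in for SOM codebook vectors).
    scores = {}
    for label, samples in training.items():
        s = sum(math.exp(-(x - v) ** 2 / (2 * sigma ** 2)) for v in samples)
        scores[label] = s / len(samples)
    # Normalizing the scores yields posteriors, i.e. the Bayesian
    # confidence measure attached to each classification.
    total = sum(scores.values())
    return {label: s / total for label, s in scores.items()}
```

A voxel intensity of 0.9 is then labeled "bone" with high confidence when the bone samples cluster near 1.0, and the posterior itself can drive opacity in the renderer.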
Adding shadows to a texture-based volume renderer
Pub Date : 1998-10-01 DOI: 10.1145/288126.288149
U. Behrens, R. Ratering
Texture-based volume rendering is a technique to efficiently visualize volumetric data using texture mapping hardware. We present an algorithm that extends this approach to render shadows for the volume. The algorithm takes advantage of the fast frame buffer operations which modern graphics hardware offers, but does not depend on any special-purpose hardware. The visual impression of the final image is significantly improved by bringing more structure and three-dimensional information into the often foggy appearance of texture-based volume renderings. Although the algorithm does not perform lighting calculations, the resulting image has a shaded appearance, which is a further visual cue to spatial understanding of the data and lets the images appear more realistic. As calculating the shadows is independent of the visualization process, it can be applied to any form of volume visualization, though volume rendering based on two- or three-dimensional texture mapping hardware makes the most sense. Compared to unshadowed texture-based volume rendering, performance decreases by less than 50%, which is still sufficient to guarantee interactive manipulation of the volume data. In the special case where only the camera moves and the light position is fixed to the scene, there is no performance decrease at all, because recalculation is needed only when the position of the light source relative to the volume changes.
Citations: 89
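The shadow computation reduces to propagating light attenuation slice by slice along the light direction: each slice receives whatever light the slices before it transmitted. A per-ray software sketch of the frame-buffer accumulation (one scalar opacity per slice):

```python
def light_attenuation(opacity_slices):
    # Walk the slices in light order; `transmitted` is the fraction
    # of light that survives all slices seen so far.
    transmitted = 1.0
    arriving = []
    for a in opacity_slices:
        arriving.append(transmitted)  # light reaching this slice
        transmitted *= (1.0 - a)      # attenuated before the next one
    return arriving
```

Because the result depends only on the light direction relative to the volume, it matches the abstract's observation that nothing needs recomputing while only the camera moves.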
Volume animation using the skeleton tree
Pub Date : 1998-10-01 DOI: 10.1145/288126.288152
N. Gagvani, Deepak R. Kenchammana-Hosekote, D. Silver
We describe a technique to animate volumes using a volumetric skeleton. The skeleton is computed from the actual volume, based on a reversible thinning procedure using the distance transform. Polygons are never computed, and the entire process remains in the volume domain. The skeletal points are connected and arranged in a "skeleton tree", which can be used for articulation in an animation program. The full volume object is regrown from the transformed skeletal points. Since the skeleton is an intuitive mechanism for animation, the animator deforms the skeleton and causes corresponding deformations in the volume object. The volumetric skeleton can also be used for volume morphing, automatic path navigation, volume smoothing and compression/decimation.
Citations: 51
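The distance transform that drives the reversible thinning can be sketched with a breadth-first search seeded at the background (2D and 4-connected for brevity; the paper operates on 3D volumes):

```python
from collections import deque

def distance_transform(grid):
    # 4-connected BFS distance to the nearest background (0) voxel;
    # background cells seed the queue with distance 0.
    h, w = len(grid), len(grid[0])
    dist = [[None] * w for _ in range(h)]
    q = deque()
    for y in range(h):
        for x in range(w):
            if grid[y][x] == 0:
                dist[y][x] = 0
                q.append((y, x))
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] is None:
                dist[ny][nx] = dist[y][x] + 1
                q.append((ny, nx))
    return dist
```

Skeletal candidates are the ridge voxels of this field (the deepest interior points); storing each one's distance as a radius is what lets the thinning be reversed by regrowing the full volume from the transformed skeleton.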
An exact interactive time visibility ordering algorithm for polyhedral cell complexes
Pub Date : 1998-10-01 DOI: 10.1145/288126.288170
Cláudio T. Silva, Joseph S. B. Mitchell, Peter L. Williams
A visibility ordering of a set of objects, from a given viewpoint, is a total order on the objects such that if object a obstructs object b, then b precedes a in the ordering. Such orderings are extremely useful for rendering volumetric data. The authors present an algorithm that generates a visibility ordering of the cells of an unstructured mesh, provided that the cells are convex polyhedra and nonintersecting, and that the visibility ordering graph does not contain cycles. The overall mesh may be nonconvex and it may have disconnected components. The technique employs the sweep paradigm to determine an ordering between pairs of exterior (mesh boundary) cells which can obstruct one another. It then builds on Williams' (1992) MPVO algorithm which exploits the ordering implied by adjacencies within the mesh. The partial ordering of the exterior cells found by sweeping is used to augment the DAG created in Phase II of the MPVO algorithm. The method thus removes the assumption of the MPVO algorithm that the mesh be convex and connected, and thereby allows one to extend the MPVO algorithm, without using the heuristics that were originally suggested by Williams (and are sometimes problematic). The resulting XMPVO algorithm has been analyzed, and a variation of it has been implemented for unstructured tetrahedral meshes; they provide experimental evidence that it performs very well in practice.
Citations: 64
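Given the obstruction DAG that the sweep and the MPVO adjacencies produce, extracting the visibility ordering itself is a topological sort: if a obstructs b, then b must precede a. A sketch on an explicit edge list:

```python
from collections import defaultdict, deque

def visibility_order(cells, obstructs):
    # `obstructs` holds pairs (a, b) meaning "a obstructs b"; b must
    # then precede a, so add an edge b -> a and topologically sort.
    indegree = {c: 0 for c in cells}
    succ = defaultdict(list)
    for a, b in obstructs:
        succ[b].append(a)
        indegree[a] += 1
    queue = deque(c for c in cells if indegree[c] == 0)
    order = []
    while queue:
        c = queue.popleft()
        order.append(c)
        for n in succ[c]:
            indegree[n] -= 1
            if indegree[n] == 0:
                queue.append(n)
    if len(order) != len(cells):
        raise ValueError("obstruction graph contains a cycle")
    return order
```

With A obstructing B and B obstructing C, the ordering is C, B, A: exactly the back-to-front order a compositing volume renderer needs. The cycle check mirrors the paper's assumption that the visibility ordering graph is acyclic.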
Edge preservation in volume rendering using splatting
Pub Date : 1998-10-01 DOI: 10.1145/288126.288158
Jian Huang, R. Crawfis, D. Stredney
The paper presents a method to preserve sharp edge details in splatting for volume rendering. Conventional splatting algorithms produce fuzzy images for views close to the volume model. The lack of detail in such views greatly hinders study and manipulation of data sets using virtual navigation. Our method applies a nonlinear warping to the footprints of conventional splats and builds a table of footprints for different possible edge positions and edge strengths. When rendering, we pick a footprint from the table for each splat, based on the relative position of the voxel to the closest edge. Encouraging results have been achieved for both synthetic data and medical data.
Citations: 12
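The footprint table can be pictured as precomputed kernel cross-sections indexed by quantized edge position: weights that would smear the splat across the edge are suppressed, then the footprint is renormalized. This is a 1D toy stand-in for the paper's nonlinear warp, not its actual footprint function:

```python
import math

def footprint(radius, edge_pos=None, steps=5):
    # 1D cross-section of a splat footprint: a Gaussian kernel whose
    # weights beyond a nearby edge (at `edge_pos` voxels from the
    # splat centre) are suppressed, then renormalized.
    weights = []
    for i in range(-steps, steps + 1):
        x = i * radius / steps
        v = math.exp(-x * x / (radius * radius))
        if edge_pos is not None and x > edge_pos:
            v = 0.0  # do not smear the splat across the edge
        weights.append(v)
    total = sum(weights)
    return [v / total for v in weights]

# Table of footprints for quantized edge positions, built once and
# indexed per splat at render time.
table = {e: footprint(2.0, edge_pos=e) for e in (0.5, 1.0, 1.5)}
```

At render time each splat looks up the entry nearest its voxel-to-edge distance, which is what keeps the per-splat cost constant.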
3D scan conversion of CSG models into distance volumes
Pub Date : 1998-10-01 DOI: 10.1145/288126.288137
D. Breen, S. Mauch, Ross T. Whitaker
A distance volume is a volume dataset where the value stored at each voxel is the shortest distance to the surface of the object being represented by the volume. Distance volumes are a useful representation in a number of computer graphics applications. We present a technique for generating a distance volume with sub-voxel accuracy from one type of geometric model, a constructive solid geometry (CSG) model consisting of superellipsoid primitives. The distance volume is generated in a two-step process. The first step calculates the shortest distance to the CSG model at a set of points within a narrow band around the evaluated surface. Additionally, a second set of points, labeled the zero set, which lies on the CSG model's surface, is computed. A point in the zero set is associated with each point in the narrow band. Once the narrow band and zero set are calculated, a fast marching method is employed to propagate the shortest-distance and closest-point information out to the remaining voxels in the volume. Our technique has been used to scan-convert a number of CSG models, producing distance volumes which have been utilized in a variety of computer graphics applications, e.g. CSG surface evaluation, offset surface generation, and 3D model morphing.
Citations: 129
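The signed-distance view of CSG behind the first step can be sketched with spheres standing in for the paper's superellipsoids. The min/max combining rules are the standard ones for distance fields and are exact only near the surface (for intersection and difference they give a bound elsewhere), and the brute-force grid sampling here stands in for the paper's narrow band plus fast marching:

```python
import math

def sphere_dist(p, center, r):
    # Signed distance to a sphere: negative inside, positive outside.
    return math.dist(p, center) - r

# Standard CSG combining rules for signed distance fields.
def csg_union(d1, d2):     return min(d1, d2)
def csg_intersect(d1, d2): return max(d1, d2)
def csg_subtract(d1, d2):  return max(d1, -d2)

def distance_volume(shape, size):
    # Brute-force sampling of the signed distance on a size^3 grid;
    # vol[x][y][z] holds the distance at voxel (x, y, z).
    return [[[shape((x, y, z)) for z in range(size)]
             for y in range(size)]
            for x in range(size)]
```

For a sphere of radius 1.5 centred at (2, 2, 2) on a 5-cube grid, the centre voxel stores -1.5 and the voxel at (2, 2, 0) stores 0.5, i.e. half a voxel outside the surface.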