{"title":"GPU加速等面体渲染使用基于深度的一致性","authors":"C. Braley, R. Hagan, Yong Cao, D. Gračanin","doi":"10.1145/1666778.1666820","DOIUrl":null,"url":null,"abstract":"With large scientific and medical datasets, visualization tools have trouble maintaining a high enough frame-rate to remain interactive. In this paper, we present a novel GPU based system that permits visualization of isosurfaces in large data sets in real time. In particular, we present a novel use of a depth buffer to speed up the operation of rotating around a volume data set. As the user rotates the viewpoint around the 3D volume data, there is much coherence between depth buffers from two sequential renderings. We utilize this coherence in our novel <i>prediction buffer</i> approach, and achieve a marked increase in speed during rotation. The authors of [Klein et al. 2005] used a depth buffer based approach, but they did not alter their traversal based on the prediction value. Our prediction buffer is a 2D array in which we store a single floating point value for each pixel. If a particular pixel <i>p</i><sub><i>ij</i></sub> has some positive depth value <i>d</i><sub><i>ij</i></sub>, this indicates that the ray <i>R</i><sub><i>ij</i></sub>, which was cast through <i>p</i><sub><i>ij</i></sub> on the previous render, intersected an isosurface at depth <i>d</i><sub><i>ij</i></sub>. The prediction buffer also handles three special cases. When the ray <i>R</i><sub><i>ij</i></sub> misses the isosurface, but hits the bounding box containing the volume data, we store a negative flag value, <i>d</i><sub><i>hitBoxMissSurf</i></sub> in <i>p</i><sub><i>ij</i></sub>. When <i>R</i><sub><i>ij</i></sub> misses the bounding box, we store the value <i>d</i><sub><i>missBox</i></sub>. Lastly, when we have no prediction stored in the buffer, we store the value <i>d</i><sub><i>noInfo</i></sub>.","PeriodicalId":180587,"journal":{"name":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","volume":"11 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2009-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"GPU accelerated isosurface volume rendering using depth-based coherence\",\"authors\":\"C. Braley, R. Hagan, Yong Cao, D. Gračanin\",\"doi\":\"10.1145/1666778.1666820\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"With large scientific and medical datasets, visualization tools have trouble maintaining a high enough frame-rate to remain interactive. In this paper, we present a novel GPU based system that permits visualization of isosurfaces in large data sets in real time. In particular, we present a novel use of a depth buffer to speed up the operation of rotating around a volume data set. As the user rotates the viewpoint around the 3D volume data, there is much coherence between depth buffers from two sequential renderings. We utilize this coherence in our novel <i>prediction buffer</i> approach, and achieve a marked increase in speed during rotation. The authors of [Klein et al. 2005] used a depth buffer based approach, but they did not alter their traversal based on the prediction value. Our prediction buffer is a 2D array in which we store a single floating point value for each pixel. 
If a particular pixel <i>p</i><sub><i>ij</i></sub> has some positive depth value <i>d</i><sub><i>ij</i></sub>, this indicates that the ray <i>R</i><sub><i>ij</i></sub>, which was cast through <i>p</i><sub><i>ij</i></sub> on the previous render, intersected an isosurface at depth <i>d</i><sub><i>ij</i></sub>. The prediction buffer also handles three special cases. When the ray <i>R</i><sub><i>ij</i></sub> misses the isosurface, but hits the bounding box containing the volume data, we store a negative flag value, <i>d</i><sub><i>hitBoxMissSurf</i></sub> in <i>p</i><sub><i>ij</i></sub>. When <i>R</i><sub><i>ij</i></sub> misses the bounding box, we store the value <i>d</i><sub><i>missBox</i></sub>. Lastly, when we have no prediction stored in the buffer, we store the value <i>d</i><sub><i>noInfo</i></sub>.\",\"PeriodicalId\":180587,\"journal\":{\"name\":\"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia\",\"volume\":\"11 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2009-12-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/1666778.1666820\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/1666778.1666820","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
With large scientific and medical datasets, visualization tools have trouble maintaining a frame rate high enough to remain interactive. In this paper, we present a novel GPU-based system that permits real-time visualization of isosurfaces in large data sets. In particular, we present a novel use of a depth buffer to speed up rotation around a volume data set. As the user rotates the viewpoint around the 3D volume data, there is considerable coherence between the depth buffers of two sequential renderings. We exploit this coherence in our novel prediction buffer approach and achieve a marked increase in speed during rotation. The authors of [Klein et al. 2005] used a depth-buffer-based approach, but they did not alter their traversal based on the prediction value. Our prediction buffer is a 2D array in which we store a single floating-point value for each pixel. If a particular pixel p_ij has a positive depth value d_ij, this indicates that the ray R_ij, cast through p_ij on the previous render, intersected an isosurface at depth d_ij. The prediction buffer also handles three special cases. When the ray R_ij misses the isosurface but hits the bounding box containing the volume data, we store a negative flag value, d_hitBoxMissSurf, in p_ij. When R_ij misses the bounding box, we store the value d_missBox. Lastly, when we have no prediction stored in the buffer, we store the value d_noInfo.
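
To make the prediction-buffer lookup concrete, the following is a minimal CPU-side sketch of the per-pixel traversal decision it drives. It is written under stated assumptions: the sentinel constants, the restartMargin parameter, and the helper names are hypothetical, since the abstract only specifies that positive entries are previous hit depths and that three flag values mark the "hit box, missed surface", "missed box", and "no information" cases. The actual system evaluates this logic per ray on the GPU.

// Sketch of the prediction-buffer decision described above.
// Sentinel values, restartMargin, and traversalStart are illustrative assumptions.
#include <vector>
#include <cmath>
#include <cstdio>

// Assumed flag values; any distinct non-positive floats would do.
constexpr float D_HIT_BOX_MISS_SURF = -1.0f; // hit bounding box, missed isosurface
constexpr float D_MISS_BOX          = -2.0f; // missed the bounding box entirely
constexpr float D_NO_INFO           = -3.0f; // no prediction stored yet

struct PredictionBuffer {
    int width, height;
    std::vector<float> depth; // one float per pixel, row-major

    PredictionBuffer(int w, int h) : width(w), height(h), depth(w * h, D_NO_INFO) {}
    float& at(int x, int y) { return depth[y * width + x]; }
    float  at(int x, int y) const { return depth[y * width + x]; }
};

// Decide where ray marching should start for pixel (x, y), given the previous
// frame's prediction. Returns a negative value when the ray is a candidate for
// skipping (it previously missed the bounding box; a cheap box re-test, omitted
// here, would confirm it still does after the small rotation).
float traversalStart(const PredictionBuffer& pred, int x, int y, float restartMargin)
{
    float d = pred.at(x, y);
    if (d > 0.0f) {
        // Coherence case: the previous render hit an isosurface at depth d, so
        // after a small rotation the new hit is likely nearby. Back up by a
        // margin and start marching there rather than at the box entry point.
        return std::fmax(d - restartMargin, 0.0f);
    }
    if (d == D_MISS_BOX) {
        return -1.0f; // candidate for early termination
    }
    // D_HIT_BOX_MISS_SURF or D_NO_INFO: no usable depth, march from the start.
    return 0.0f;
}

int main() {
    PredictionBuffer pred(4, 4);
    pred.at(1, 1) = 3.2f;       // previous frame hit a surface at depth 3.2
    pred.at(2, 2) = D_MISS_BOX; // previous frame missed the volume entirely

    std::printf("start(1,1) = %.2f\n", traversalStart(pred, 1, 1, 0.5f)); // 2.70
    std::printf("start(2,2) = %.2f\n", traversalStart(pred, 2, 2, 0.5f)); // -1.00
    return 0;
}

After each frame, the buffer would be rewritten with the new hit depths (or the appropriate flag), so that the next rotation step can again reuse the previous result.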