
Latest publications from the 2009 IEEE Pacific Visualization Symposium

Contextual picking of volumetric structures
Pub Date: 2009-04-20 DOI: 10.1109/PACIFICVIS.2009.4906855
P. Kohlmann, S. Bruckner, A. Kanitsar, E. Gröller
This paper presents a novel method for the interactive identification of contextual interest points within volumetric data by picking on a direct volume rendered image. In clinical diagnostics the points of interest are often located in the center of anatomical structures. In order to derive the volumetric position which allows a convenient examination of the intended structure, the system automatically extracts contextual meta information from the DICOM (Digital Imaging and Communications in Medicine) images and the setup of the medical workstation. Along the viewing ray of a volumetric pick, the ray profile is analyzed for structures similar to predefined templates from a knowledge base. We demonstrate with our results that the obtained position in 3D can be utilized to highlight a structure in 2D slice views, to interactively calculate centerlines of tubular objects, or to place labels at contextually-defined volumetric positions.
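The ray-profile analysis the abstract describes can be pictured with a minimal sketch: sample intensities along the picking ray, then score sliding windows of the profile against a 1D template by normalized cross-correlation and return the best-matching depth. This is an illustrative reading only, not the paper's implementation; the function name `pick_along_ray`, the nearest-neighbour sampling, and the NCC score are all assumptions.

```python
import numpy as np

def pick_along_ray(volume, origin, direction, templates, num_samples=256):
    """Sample a viewing ray through `volume` and return the depth whose local
    intensity profile best matches one of the 1D `templates` (normalized
    cross-correlation). Illustrative sketch, not the paper's method."""
    direction = np.asarray(direction, float)
    direction /= np.linalg.norm(direction)
    ts = np.linspace(0, min(volume.shape), num_samples)
    pts = np.asarray(origin, float) + ts[:, None] * direction
    # Nearest-neighbour sampling, clamped to the volume bounds.
    idx = np.clip(np.round(pts).astype(int), 0, np.array(volume.shape) - 1)
    profile = volume[idx[:, 0], idx[:, 1], idx[:, 2]]

    best_score, best_t = -np.inf, None
    for tmpl in templates:
        w = len(tmpl)
        tmpl_n = (tmpl - tmpl.mean()) / (tmpl.std() + 1e-9)
        for i in range(len(profile) - w):
            win = profile[i:i + w]
            win_n = (win - win.mean()) / (win.std() + 1e-9)
            score = float(win_n @ tmpl_n) / w  # normalized cross-correlation
            if score > best_score:
                best_score, best_t = score, ts[i + w // 2]
    return best_t, best_score
```

A pick on a synthetic volume with a bright slab returns a depth near the slab's center, which could then seed slice highlighting or centerline computation as in the abstract.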
Citations: 26
Structure-aware viewpoint selection for volume visualization
Pub Date: 2009-04-20 DOI: 10.1109/PACIFICVIS.2009.4906856
Y. Tao, Hai Lin, H. Bao, F. Dong, G. Clapworthy
Viewpoint selection is becoming a useful part of the volume visualization pipeline, as it further improves the efficiency of data understanding by providing representative viewpoints. We present two structure-aware view descriptors, the shape view descriptor and the detail view descriptor, to select the optimal viewpoint carrying the maximum amount of structural information. Both proposed descriptors are based on the gradient direction, as the gradient is a well-defined measure of boundary structures, which have proved to be features of interest in many applications. The shape view descriptor is designed to evaluate the overall orientation of features of interest. To estimate local details, we employ the bilateral filter to construct the shape volume. The bilateral filter is very effective at smoothing local details while preserving strong boundary structures. As a result, large-scale global structures reside in the shape volume, while small-scale local details remain in the original volume. The detail view descriptor measures the amount of visible detail on boundary structures in terms of the variance of the local structure between the shape volume and the original volume. These two view descriptors can be integrated into a viewpoint selection framework, which can emphasize global structures or local details with flexibility tailored to the user's specific situation. We performed experiments on various types of volume datasets. These experiments verify the effectiveness of our proposed view descriptors, and the proposed viewpoint selection framework indeed locates the optimal viewpoints that show the maximum amount of structural information.
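The core idea of scoring viewpoints by gradient direction can be sketched as follows: collect strong-gradient (boundary) voxels, generate candidate view directions on a sphere, and prefer views that face many boundary normals head-on. This is a loose sketch of a gradient-based view descriptor under assumed names (`best_viewpoint`, the gradient-magnitude threshold, the Fibonacci-sphere sampling); it omits the paper's bilateral-filter shape volume and detail descriptor.

```python
import numpy as np

def best_viewpoint(volume, num_candidates=64, grad_thresh=0.1):
    """Score candidate viewing directions by how much boundary structure
    (strong gradients) faces the viewer. Sketch only, not the paper's
    exact formulation."""
    # Gradients along the three volume axes; rows are per-voxel gradients.
    grads = np.stack(np.gradient(volume.astype(float)), axis=-1).reshape(-1, 3)
    mags = np.linalg.norm(grads, axis=1)
    strong = grads[mags > grad_thresh]          # keep boundary voxels only
    dirs = strong / np.linalg.norm(strong, axis=1, keepdims=True)

    # Candidate view directions on a Fibonacci sphere.
    i = np.arange(num_candidates)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i
    z = 1 - 2 * (i + 0.5) / num_candidates
    r = np.sqrt(1 - z * z)
    cands = np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

    # A view aligned with many boundary normals scores high.
    scores = np.abs(dirs @ cands.T).sum(axis=0)
    return cands[int(np.argmax(scores))], scores
```

For a volume whose only boundary is a plane, the winning direction is (up to sign) the plane normal, matching the intuition that the optimal view faces the dominant structure.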
Citations: 24
Interactive feature extraction and tracking by utilizing region coherency
Pub Date: 2009-04-20 DOI: 10.1109/PACIFICVIS.2009.4906833
C. Muelder, K. Ma
The ability to extract and follow time-varying flow features in volume data generated from large-scale numerical simulations enables scientists to effectively see and validate modeled phenomena and processes. Extracted features often take much less storage space and computing resources to visualize. Most feature extraction and tracking methods first identify features of interest in each time step independently, then establish correspondences between these features in consecutive time steps of the data. Since these methods handle each time step separately, they do not use the coherency of the feature along the time dimension in the extraction process. In this paper, we present a prediction-correction method that uses a prediction step to make the best guess of the feature region in the subsequent time step, followed by growing and shrinking the border of the predicted region to coherently extract the actual feature of interest. This method makes use of the temporal-space coherency of the data to accelerate the extraction process while implicitly solving the tedious correspondence problem that previous methods focus on. Our method is low-cost with very little storage overhead, and thus facilitates interactive or runtime extraction and visualization, unlike previous methods, which were largely suited to batch-mode processing due to high computational cost.
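The prediction-correction loop the abstract outlines can be sketched in 2D for brevity: predict the next region by translating the current one along its recent centroid motion, shrink away predicted cells that fall below the feature threshold, then grow into neighbouring above-threshold cells. This is a simplified reading under assumed names (`predict_correct`, the threshold-based feature definition), not the authors' implementation.

```python
import numpy as np
from collections import deque

def predict_correct(mask_prev, mask_curr, field_next, thresh):
    """Track a thresholded feature region one step forward.
    Prediction: shift by the recent centroid displacement.
    Correction: shrink below-threshold cells, flood-grow along the border."""
    # Prediction: translate the region by its centroid motion.
    c_prev = np.array(np.nonzero(mask_prev)).mean(axis=1)
    c_curr = np.array(np.nonzero(mask_curr)).mean(axis=1)
    shift = tuple(np.round(c_curr - c_prev).astype(int))
    pred = np.roll(mask_curr, shift, axis=(0, 1))

    # Correction 1 (shrink): drop predicted cells outside the feature.
    region = pred & (field_next >= thresh)

    # Correction 2 (grow): flood-fill into adjacent above-threshold cells.
    q = deque(zip(*np.nonzero(region)))
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < region.shape[0] and 0 <= nx < region.shape[1]
                    and not region[ny, nx] and field_next[ny, nx] >= thresh):
                region[ny, nx] = True
                q.append((ny, nx))
    return region
```

Because the region is grown from the predicted overlap rather than re-matched from scratch, the correspondence between time steps is established implicitly, which is the point the abstract makes.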
Citations: 57
Moment curves
Pub Date: 2009-04-20 DOI: 10.1109/PACIFICVIS.2009.4906857
Daniel Patel, M. Haidacher, Jean-Paul Balabanian, E. Gröller
We define a transfer function based on the first and second statistical moments. We consider the evolution of the mean and variance with respect to a growing neighborhood around a voxel. This evolution defines a curve in 3D for which we identify important trends and project it back to 2D. The resulting 2D projection can be brushed for easy and robust classification of materials and material borders. The transfer function is applied to both CT and MR data.
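The first part of the construction, the evolution of mean and variance over a growing neighbourhood, can be sketched directly; a minimal version using cubic neighbourhoods is below. The function name `moment_curve` and the cubic (rather than spherical) neighbourhood are assumptions for illustration; the projection of the curve to 2D is omitted.

```python
import numpy as np

def moment_curve(volume, voxel, max_radius=4):
    """For a growing cubic neighbourhood around `voxel`, record the pair
    (mean, variance) at each radius. A flat curve suggests a homogeneous
    material; rising variance suggests a material border. Sketch only."""
    z, y, x = voxel
    curve = []
    for r in range(max_radius + 1):
        # Clamp the (2r+1)^3 neighbourhood to the volume bounds.
        zs, ze = max(z - r, 0), min(z + r + 1, volume.shape[0])
        ys, ye = max(y - r, 0), min(y + r + 1, volume.shape[1])
        xs, xe = max(x - r, 0), min(x + r + 1, volume.shape[2])
        block = volume[zs:ze, ys:ye, xs:xe]
        curve.append((float(block.mean()), float(block.var())))
    return np.array(curve)
```

For a voxel deep inside one material the curve stays at a single point (constant mean, near-zero variance), while for a voxel on a border the variance grows with the radius; it is this difference in trend that makes the projected curves brushable for classification.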
Citations: 38