Photo-realistic representation of anatomical structures for medical education by fusion of volumetric and surface image data

Arthur W. Wetzel, G. L. Nieder, Geri Durka-Pelok, T. Gest, S. Pomerantz, Démian Nave, S. Czanner, Lynn Wagner, Ethan Shirey, D. Deerfield
{"title":"Photo-realistic representation of anatomical structures for medical education by fusion of volumetric and surface image data","authors":"Arthur W. Wetzel, G. L. Nieder, Geri Durka-Pelok, T. Gest, S. Pomerantz, Démian Nave, S. Czanner, Lynn Wagner, Ethan Shirey, D. Deerfield","doi":"10.1109/AIPR.2003.1284261","DOIUrl":null,"url":null,"abstract":"We have produced improved photo-realistic views of anatomical structures for medical education combining data from photographic images of anatomical surfaces with optical, CT and MRI volumetric data such as provided by the NLM Visible Human Project. Volumetric data contains the information needed to construct 3D geometrical models of anatomical structures, but cannot provide a realistic appearance for surfaces. Nieder has captured high quality photographic sequences of anatomy specimens over a range of rotational angles. These have been assembled into QuickTime VR Object movies that can be viewed statically or dynamically. We reuse this surface imagery to produce textures and surface reflectance maps for 3D anatomy models to allow viewing from any orientation and lighting condition. Because the volumetric data comes from different individuals than the surface images, we have to warp these data into alignment. Currently we do not use structured lighting or other direct 3D surface information, so surface shape is recovered from rotational sequences using silhouettes and texture correlations. The results of this work improves the appearance and generality of models, used for anatomy instruction with the PSC Volume Browser.","PeriodicalId":176987,"journal":{"name":"32nd Applied Imagery Pattern Recognition Workshop, 2003. Proceedings.","volume":"20 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2003-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"32nd Applied Imagery Pattern Recognition Workshop, 2003. Proceedings.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AIPR.2003.1284261","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

We have produced improved photo-realistic views of anatomical structures for medical education by combining data from photographic images of anatomical surfaces with optical, CT and MRI volumetric data, such as those provided by the NLM Visible Human Project. Volumetric data contains the information needed to construct 3D geometrical models of anatomical structures, but cannot provide a realistic appearance for surfaces. Nieder has captured high-quality photographic sequences of anatomy specimens over a range of rotational angles. These have been assembled into QuickTime VR Object movies that can be viewed statically or dynamically. We reuse this surface imagery to produce textures and surface reflectance maps for 3D anatomy models, allowing viewing from any orientation and under any lighting condition. Because the volumetric data comes from different individuals than the surface images, we must warp these data into alignment. Currently we do not use structured lighting or other direct 3D surface information, so surface shape is recovered from the rotational sequences using silhouettes and texture correlations. The results of this work improve the appearance and generality of the models used for anatomy instruction with the PSC Volume Browser.
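The abstract notes that surface shape is recovered from rotational photographic sequences using silhouettes. A standard way to exploit silhouettes from a turntable-style sequence is visual-hull carving: a voxel survives only if its projection lands inside the object silhouette in every view. The sketch below illustrates that idea under simplifying assumptions not stated in the paper (orthographic projection, rotation about a vertical axis, known angles); it is a minimal illustration of silhouette carving, not the authors' implementation.

```python
import numpy as np

def carve_visual_hull(silhouettes, angles_deg, grid_size=64, extent=1.0):
    """Approximate an object's visual hull from a rotational silhouette sequence.

    silhouettes : list of 2D boolean masks (H x W), True where the object is.
    angles_deg  : rotation angle (about the vertical axis) for each mask.
    Assumes an orthographic camera and that the object fits inside the
    cube [-extent, extent]^3 centered on the rotation axis (hypothetical setup).
    """
    # Build a voxel grid; every voxel starts as "occupied".
    coords = np.linspace(-extent, extent, grid_size)
    xs, ys, zs = np.meshgrid(coords, coords, coords, indexing="ij")
    occupied = np.ones((grid_size,) * 3, dtype=bool)

    for mask, angle in zip(silhouettes, angles_deg):
        h, w = mask.shape
        theta = np.radians(angle)
        # Rotate voxel centers into the camera frame (rotation about the vertical axis).
        xr = xs * np.cos(theta) + zs * np.sin(theta)
        # Orthographic projection: image column from rotated x, image row from height y.
        u = np.clip(((xr + extent) / (2 * extent) * (w - 1)).astype(int), 0, w - 1)
        v = np.clip(((ys + extent) / (2 * extent) * (h - 1)).astype(int), 0, h - 1)
        # Carve away voxels whose projection falls outside this view's silhouette.
        occupied &= mask[v, u]

    return occupied

# Example use: masks segmented from photos taken every 10 degrees.
# hull = carve_visual_hull(masks, np.arange(0, 360, 10))
```

The visual hull is only an outer bound on the true surface; the paper's use of texture correlations across views would be one way to refine concavities that silhouettes alone cannot recover.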