Monocular Microscope to CT Registration using Pose Estimation of the Incus for Augmented Reality Cochlear Implant Surgery

ArXiv Pub Date: 2024-03-12 | DOI: 10.1117/12.3008830
Yike Zhang, Eduardo Davalos, Dingjie Su, Ange Lou, Jack H. Noble
{"title":"Monocular Microscope to CT Registration using Pose Estimation of the Incus for Augmented Reality Cochlear Implant Surgery","authors":"Yike Zhang, Eduardo Davalos, Dingjie Su, Ange Lou, Jack H. Noble","doi":"10.1117/12.3008830","DOIUrl":null,"url":null,"abstract":"For those experiencing severe-to-profound sensorineural hearing loss, the cochlear implant (CI) is the preferred treatment. Augmented reality (AR) aided surgery can potentially improve CI procedures and hearing outcomes. Typically, AR solutions for image-guided surgery rely on optical tracking systems to register pre-operative planning information to the display so that hidden anatomy or other important information can be overlayed and co-registered with the view of the surgical scene. In this paper, our goal is to develop a method that permits direct 2D-to-3D registration of the microscope video to the pre-operative Computed Tomography (CT) scan without the need for external tracking equipment. Our proposed solution involves using surface mapping of a portion of the incus in surgical recordings and determining the pose of this structure relative to the surgical microscope by performing pose estimation via the perspective-n-point (PnP) algorithm. This registration can then be applied to pre-operative segmentations of other anatomy-of-interest, as well as the planned electrode insertion trajectory to co-register this information for the AR display. Our results demonstrate the accuracy with an average rotation error of less than 25 degrees and a translation error of less than 2 mm, 3 mm, and 0.55% for the x, y, and z axes, respectively. Our proposed method has the potential to be applicable and generalized to other surgical procedures while only needing a monocular microscope during intra-operation.","PeriodicalId":513202,"journal":{"name":"ArXiv","volume":"53 2","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ArXiv","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1117/12.3008830","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

For those experiencing severe-to-profound sensorineural hearing loss, the cochlear implant (CI) is the preferred treatment. Augmented reality (AR) aided surgery can potentially improve CI procedures and hearing outcomes. Typically, AR solutions for image-guided surgery rely on optical tracking systems to register pre-operative planning information to the display so that hidden anatomy or other important information can be overlaid and co-registered with the view of the surgical scene. In this paper, our goal is to develop a method that permits direct 2D-to-3D registration of the microscope video to the pre-operative Computed Tomography (CT) scan without the need for external tracking equipment. Our proposed solution involves surface mapping of a portion of the incus in surgical recordings and determining the pose of this structure relative to the surgical microscope by performing pose estimation via the perspective-n-point (PnP) algorithm. This registration can then be applied to pre-operative segmentations of other anatomy of interest, as well as to the planned electrode insertion trajectory, to co-register this information for the AR display. Our results demonstrate accuracy with an average rotation error of less than 25 degrees and translation errors of less than 2 mm, 3 mm, and 0.55% along the x, y, and z axes, respectively. Our proposed method has the potential to be applicable and generalizable to other surgical procedures while requiring only a monocular microscope intra-operatively.
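
To make the registration pipeline concrete, below is a minimal, hypothetical sketch of the two steps named in the abstract: recovering the CT-to-microscope pose from 2D-3D correspondences with a perspective-n-point solver, and reusing that pose to project pre-operative planning data into the microscope frame. This is not the authors' implementation; the synthetic incus surface points, the camera intrinsics, the stand-in electrode trajectory, and the choice of OpenCV's cv2.solvePnP and cv2.projectPoints are all assumptions made for illustration.

```python
# Hypothetical sketch of PnP-based microscope-to-CT registration (not the
# authors' code). Synthetic 3D points stand in for the mapped incus surface;
# they are projected with a known pose to fabricate 2D observations, the pose
# is then recovered with cv2.solvePnP, and a stand-in electrode trajectory is
# re-projected into the microscope frame as an AR-style overlay would require.
import numpy as np
import cv2

rng = np.random.default_rng(0)

# Assumed pinhole intrinsics of the surgical microscope camera (pixels).
K = np.array([[1800.0, 0.0, 960.0],
              [0.0, 1800.0, 540.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # lens distortion neglected in this sketch

# Synthetic 3D points (mm, CT coordinates) standing in for the incus surface patch.
pts_ct = rng.uniform(-3.0, 3.0, size=(30, 3)) + np.array([0.0, 0.0, 120.0])

# Ground-truth pose used only to fabricate noisy 2D observations.
rvec_gt = np.array([[0.10], [-0.20], [0.05]])   # Rodrigues rotation vector
tvec_gt = np.array([[1.0], [-2.0], [15.0]])     # translation (mm)
pts_img, _ = cv2.projectPoints(pts_ct, rvec_gt, tvec_gt, K, dist)
pts_img = pts_img.reshape(-1, 2) + rng.normal(0.0, 0.5, size=(30, 2))  # pixel noise

# Pose estimation via PnP: CT-to-camera rotation and translation.
ok, rvec, tvec = cv2.solvePnP(pts_ct, pts_img, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)

# Apply the recovered registration to other pre-operative data, e.g. a
# stand-in planned electrode insertion trajectory, by projecting it into
# the microscope frame for the AR display.
trajectory_ct = np.array([[0.0, 0.0, 118.0],
                          [0.5, 0.5, 122.0],
                          [1.0, 1.0, 126.0]])
trajectory_px, _ = cv2.projectPoints(trajectory_ct, rvec, tvec, K, dist)

print("estimated rotation (Rodrigues):", rvec.ravel())
print("projected trajectory (px):\n", trajectory_px.reshape(-1, 2))
```

In practice the 2D-3D correspondences would come from the surface-mapping step on the incus rather than synthetic data, and a robust variant such as cv2.solvePnPRansac could be substituted to reject outlier correspondences before refinement.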