
IEEE International Conference on Automation Science and Engineering (CASE) : [proceedings]. IEEE Conference on Automation Science and Engineering - Latest Publications

Deep Learning-Enhanced Robotic Subretinal Injection with Real-Time Retinal Motion Compensation.
Tianle Wu, Mojtaba Esfandiari, Peiyao Zhang, Russell H Taylor, Peter Gehlbach, Iulian Iordachita

Subretinal injection is a critical procedure for delivering therapeutic agents to treat retinal diseases such as inherited retinal diseases (IRD) and age-related macular degeneration (AMD). However, retinal motion caused by physiological factors such as respiration and heartbeat significantly impacts precise needle positioning, increasing the risk of retinal pigment epithelium (RPE) damage. This paper presents a fully autonomous robotic subretinal injection system that integrates intraoperative optical coherence tomography (iOCT) imaging and deep learning-based motion prediction to synchronize needle and retinal motion. A Long Short-Term Memory (LSTM) neural network is used to predict internal limiting membrane (ILM) motion, outperforming a Fast Fourier Transform (FFT)-based baseline model. Additionally, a real-time registration framework aligns the needle tip position with the robot's coordinate frame. Then, a dynamic proportional speed control strategy ensures smooth and adaptive needle insertion. Experimental validation in both simulation and ex vivo open-sky porcine eyes demonstrates precise motion synchronization and successful subretinal injections. The experiments achieve a mean tracking error below 16.4 μm in pre-insertion phases. These results show the potential of AI-driven robotic assistance to improve the safety and accuracy of retinal microsurgery.
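
To make the prediction component above concrete, here is a minimal PyTorch sketch of next-step ILM motion prediction with an LSTM, in the spirit of the paper's approach. The 1-D depth-trace input, window length, and network sizes are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch: next-step ILM depth prediction with an LSTM.
# The 1-D input trace, window length, and layer sizes are assumptions.
import torch
import torch.nn as nn

class ILMMotionLSTM(nn.Module):
    """Predicts the next ILM depth sample from a window of past samples."""

    def __init__(self, hidden_size: int = 64, num_layers: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size,
                            num_layers=num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, window, 1) past depth samples, e.g. in micrometers
        out, _ = self.lstm(x)              # (batch, window, hidden)
        return self.head(out[:, -1, :])    # (batch, 1) next-step estimate

# Usage: predict the next depth sample from a 50-sample history.
model = ILMMotionLSTM()
history = torch.randn(1, 50, 1)  # placeholder for a real iOCT depth trace
next_depth = model(history)
```

A predictor of this kind would feed the speed controller so the needle tracks the predicted, rather than the last observed, retinal position.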

{"title":"Deep Learning-Enhanced Robotic Subretinal Injection with Real-Time Retinal Motion Compensation.","authors":"Tianle Wu, Mojtaba Esfandiari, Peiyao Zhang, Russell H Taylor, Peter Gehlbach, Iulian Iordachita","doi":"10.1109/case58245.2025.11163942","DOIUrl":"10.1109/case58245.2025.11163942","url":null,"abstract":"<p><p>Subretinal injection is a critical procedure for delivering therapeutic agents to treat retinal diseases such as inherited retinal diseases (IRD) and age-related macular degeneration (AMD). However, retinal motion caused by physiological factors such as respiration and heartbeat significantly impacts precise needle positioning, increasing the risk of retinal pigment epithelium (RPE) damage. This paper presents a fully autonomous robotic subretinal injection system that integrates intraoperative optical coherence tomography (iOCT) imaging and deep learning-based motion prediction to synchronize needle and retinal motion. A Long Short-Term Memory (LSTM) neural network is used to predict internal limiting membrane (ILM) motion, outperforming a Fast Fourier Transform (FFT)-based baseline model. Additionally, a real-time registration framework aligns the needle tip position with the robot's coordinate frame. Then, a dynamic proportional speed control strategy ensures smooth and adaptive needle insertion. Experimental validation in both simulation and <i>ex vivo</i> open-sky porcine eyes demonstrates precise motion synchronization and successful subretinal injections. The experiments achieve a mean tracking error below 16.4 <i>μ</i>m in pre-insertion phases. These results show the potential of AI-driven robotic assistance to improve the safety and accuracy of retinal microsurgery.</p>","PeriodicalId":90520,"journal":{"name":"IEEE International Conference on Automation Science and Engineering (CASE) : [proceedings]. IEEE Conference on Automation Science and Engineering","volume":"2025 ","pages":"1285-1291"},"PeriodicalIF":0.0,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12459653/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145152294","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Fusing Tool Segmentation Predictions from Pose-Informed Morphological Polar Transform of Endoscopic Images.
Xiaoyi Wu, Dina Sehnawi, Yicheng Zhu, Yangming Lee, Kevin Huang

This paper presents and evaluates methods of fusing semantic image segmentation predictions, and highlights a novel hybrid approach that combines spatial frequency and edge features. Tool-labeled endoscopy from sinus surgery served as the image dataset, while two methods of surgical tool segmentation via morphological polar transform provided distinct predictions. The morphological transform acted as an input pre-processing step prior to segmentation via the U-Net architecture. Two separate predictions were available for each image based on the transformation center: one at the surgical tool-tip (TT) and one at the surgical tool vanishing point (VP). The goal in this work was to systematically generate a superior segmentation by fusing information from the two aforementioned predictions. Improved segmentation performance in this domain is envisioned to enable vision-based force estimation in robot-assisted minimally invasive surgery (RMIS), where lack of reliable force and tactile feedback has continued to be an ongoing challenge. While methods for deep learning based segmentation fusion exist, such methods require extensive datasets and potentially obfuscate explainability. Thus, three approaches relying solely on low-level features to fuse grayscale segmentation predictions were proposed in this work: (1) gradient estimation, (2) Laplacian pyramid and (3) a modified spatial frequency method. The latter two demonstrated enhanced segmentation compared to original predictions. This work also explores explainability towards identifying candidate prediction pairs for fusion via unsupervised clustering as well as a ResNet-18 model. Cursory investigations into properties of the fused predictions provide insight into the potential use of the proposed methods in domains other than surgical tool segmentation.
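
As a concrete illustration of one of the three low-level fusion strategies, the following NumPy/OpenCV sketch fuses two grayscale prediction maps (e.g., the TT- and VP-centered outputs) with a Laplacian pyramid. The four-level depth, the per-pixel max-absolute-coefficient selection rule, and averaging of the coarse base are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch: Laplacian-pyramid fusion of two grayscale prediction
# maps in [0, 1]. Pyramid depth and fusion rules are assumptions.
import cv2
import numpy as np

def laplacian_pyramid(img: np.ndarray, levels: int) -> list[np.ndarray]:
    gauss = [img.astype(np.float32)]
    for _ in range(levels):
        gauss.append(cv2.pyrDown(gauss[-1]))
    lap = []
    for i in range(levels):
        up = cv2.pyrUp(gauss[i + 1], dstsize=gauss[i].shape[1::-1])
        lap.append(gauss[i] - up)        # band-pass detail at level i
    lap.append(gauss[-1])                # coarsest Gaussian level as base
    return lap

def fuse_predictions(pred_tt: np.ndarray, pred_vp: np.ndarray,
                     levels: int = 4) -> np.ndarray:
    lap_a = laplacian_pyramid(pred_tt, levels)
    lap_b = laplacian_pyramid(pred_vp, levels)
    # Per-level, per-pixel selection of the stronger detail coefficient.
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)
             for a, b in zip(lap_a[:-1], lap_b[:-1])]
    fused.append(0.5 * (lap_a[-1] + lap_b[-1]))  # average the coarse base
    out = fused[-1]
    for lvl in reversed(fused[:-1]):     # collapse the pyramid back up
        out = cv2.pyrUp(out, dstsize=lvl.shape[1::-1]) + lvl
    return np.clip(out, 0.0, 1.0)
```

Keeping the fusion rule at the level of such low-level features is what preserves the explainability the abstract contrasts with deep fusion networks.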

{"title":"Fusing Tool Segmentation Predictions from Pose-Informed Morphological Polar Transform of Endoscopic Images.","authors":"Xiaoyi Wu, Dina Sehnawi, Yicheng Zhu, Yangming Lee, Kevin Huang","doi":"10.1109/case58245.2025.11164078","DOIUrl":"10.1109/case58245.2025.11164078","url":null,"abstract":"<p><p>This paper presents and evaluates methods of fusing semantic image segmentation predictions, and highlights a novel hybrid approach that combines spatial frequency and edge features. Tool-labeled endoscopy from sinus surgery served as the image dataset, while two methods of surgical tool segmentation via morphological polar transform provided distinct predictions. The morphological transform acted as an input pre-processing step prior to segmentation via the U-Net architecture. Two separate predictions were available for each image based on the transformation center: one at the surgical tool-tip (TT) and one at the surgical tool vanishing point (VP). The goal in this work was to systematically generate a superior segmentation by fusing information from the two aforementioned predictions. Improved segmentation performance in this domain is envisioned to enable vision-based force estimation in robot-assisted minimally invasive surgery (RMIS), where lack of reliable force and tactile feedback has continued to be an ongoing challenge. While methods for deep learning based segmentation fusion exist, such methods require extensive datasets and potentially obfuscate explainability. Thus, three approaches relying solely on low-level features to fuse grayscale segmentation predictions were proposed in this work: (1) gradient estimation, (2) Laplacian pyramid and (3) a modified spatial frequency method. The latter two demonstrated enhanced segmentation compared to original predictions. This work also explores explainability towards identifying candidate prediction pairs for fusion via unsupervised clustering as well as a ResNet-18 model. Cursory investigations into properties of the fused predictions provide insight into the potential use of the proposed methods in domains other than surgical tool segmentation.</p>","PeriodicalId":90520,"journal":{"name":"IEEE International Conference on Automation Science and Engineering (CASE) : [proceedings]. IEEE Conference on Automation Science and Engineering","volume":"2025 ","pages":"1316-1322"},"PeriodicalIF":0.0,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12743423/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145851676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Particle Filter Based Active Localization of Target and Needle in Robotic Image-Guided Intervention Systems.
Mark Renfrew, Zhuofu Bai, M Cenk Cavuşoğlu

This paper presents a probabilistic method for active localization of needle and targets in robotic image guided interventions. Specifically, an active localization scenario where the system directly controls the imaging system to actively localize the needle and target locations using intra-operative medical imaging (e.g., computerized tomography and ultrasound imaging) is explored. In the proposed method, the active localization problem is posed as an information maximization problem, where the beliefs for the needle and target states are represented and estimated using particle filters. The proposed method is also validated using a simulation study.
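
For a rough sense of the information-maximization formulation, the toy sketch below maintains a particle belief over a 2-D target and scores candidate imaging directions by expected posterior entropy, choosing the view expected to be most informative. The state dimension, Gaussian measurement model, and discrete candidate views are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch: particle belief update plus entropy-based view selection.
# State, measurement model, and candidate views are toy assumptions.
import numpy as np

rng = np.random.default_rng(0)

def update_belief(particles, weights, z, view, sigma=0.5):
    """Reweight particles by the likelihood of measurement z along `view`."""
    predicted = particles @ view          # projection onto the view axis
    likelihood = np.exp(-0.5 * ((z - predicted) / sigma) ** 2)
    weights = weights * likelihood
    return weights / weights.sum()

def expected_entropy(particles, weights, view, sigma=0.5, n_samples=20):
    """Monte Carlo estimate of posterior entropy if we imaged along `view`."""
    h = 0.0
    for _ in range(n_samples):
        i = rng.choice(len(particles), p=weights)        # sample a true state
        z = particles[i] @ view + rng.normal(0.0, sigma)  # simulate a scan
        w = update_belief(particles, weights, z, view, sigma)
        h += -np.sum(w * np.log(w + 1e-12)) / n_samples
    return h

# Belief over the target position, and two candidate imaging directions.
particles = rng.normal(0.0, 5.0, size=(500, 2))
weights = np.full(500, 1.0 / 500)
views = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
best = min(views, key=lambda v: expected_entropy(particles, weights, v))
```

In a full system a second particle filter would track the needle state, and the selected view would drive the intraoperative imaging hardware.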

{"title":"Particle Filter Based Active Localization of Target and Needle in Robotic Image-Guided Intervention Systems.","authors":"Mark Renfrew,&nbsp;Zhuofu Bai,&nbsp;M Cenk Cavuşoğlu","doi":"10.1109/CoASE.2013.6653938","DOIUrl":"https://doi.org/10.1109/CoASE.2013.6653938","url":null,"abstract":"<p><p>This paper presents a probabilistic method for active localization of needle and targets in robotic image guided interventions. Specifically, an active localization scenario where the system directly controls the imaging system to actively localize the needle and target locations using intra-operative medical imaging (e.g., computerized tomography and ultrasound imaging) is explored. In the proposed method, the active localization problem is posed as an information maximization problem, where the beliefs for the needle and target states are represented and estimated using particle filters. The proposed method is also validated using a simulation study.</p>","PeriodicalId":90520,"journal":{"name":"IEEE International Conference on Automation Science and Engineering (CASE) : [proceedings]. IEEE Conference on Automation Science and Engineering","volume":"2013 ","pages":"448-454"},"PeriodicalIF":0.0,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/CoASE.2013.6653938","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"32804376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 8
Journal
IEEE International Conference on Automation Science and Engineering (CASE) : [proceedings]. IEEE Conference on Automation Science and Engineering