Guiding the Last Centimeter: Novel Anatomy-Aware Probe Servoing for Standardized Imaging Plane Navigation in Robotic Lung Ultrasound

IEEE Transactions on Automation Science and Engineering · IF 6.4 · JCR Q1 (Automation & Control Systems) · CAS Tier 2 (Computer Science)
Vol. 22, pp. 6569–6580 · Published: 2024-09-10 · DOI: 10.1109/TASE.2024.3448241
Xihan Ma; Mingjie Zeng; Jeffrey C. Hill; Beatrice Hoffmann; Ziming Zhang; Haichong K. Zhang
Citations: 0

Abstract

Navigating the ultrasound (US) probe to the standardized imaging plane (SIP) for image acquisition is a critical but operator-dependent task in conventional freehand diagnostic US. Robotic US systems (RUSS) offer the potential to enhance imaging consistency by leveraging real-time US image feedback to optimize the probe pose, thereby reducing reliance on operator expertise. However, determining the proper approach to extracting generalizable features from the US images for probe pose adjustment remains challenging. In this work, we propose a SIP navigation framework for RUSS, exemplified in the context of robotic lung ultrasound (LUS). This framework facilitates automatic probe adjustment when in proximity to the SIP. This is achieved by explicitly extracting multiple anatomical features presented in real-time LUS images and performing non-patient-specific template matching to generate probe motion towards the SIP using image-based visual servoing (IBVS). The framework is further integrated with the active-sensing end-effector (A-SEE), a customized robot end-effector that leverages patient external body geometry to maintain optimal probe alignment with the contact surface, thus preserving US signal quality throughout the navigation. The proposed approach ensures procedural interpretability and inter-patient adaptability. Validation is conducted through anatomy-mimicking phantom and in-vivo evaluations involving five human subjects. The results show the framework's high navigation precision: the probe was correctly located at the SIP in all cases, with positioning errors under 2 mm in translation and under 2 degrees in rotation. These results demonstrate the navigation process's capability to accommodate anatomical variations among patients.
Note to Practitioners—Compared with traditional freehand ultrasound (US) imaging, robotic ultrasound systems (RUSS) have the potential to standardize diagnostic outcomes that currently vary with operator expertise, provided an automatic, inter-patient-consistent standardized imaging plane (SIP) navigation process is available. This paper presents a SIP navigation framework for lung US (LUS) examination, which recognizes anatomical landmarks in the US images and fine-tunes the pose of the US probe so that the landmarks are positioned in accordance with a non-patient-specific template image. A special end-effector, the active-sensing end-effector (A-SEE), maintains the probe at an optimal orientation with respect to the body, allowing US images of consistent quality to be acquired throughout the navigation. Unlike previous works, our approach can navigate to complicated SIPs containing multiple anatomical structures with interpretable robot arm motion. We verified our framework's ability to navigate the probe to the SIP with millimeter-level accuracy in both phantom and human experiments. While preliminary results demonstrate the framework's efficacy in guiding the robotic LUS procedure, the system's performance on other examinations involving soft tissues (e.g., liver and thyroid US) requires further validation. In the future, the framework can be applied to various US examinations by implementing examination-specific anatomical feature detection modules.
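The probe motion generation described above relies on image-based visual servoing (IBVS), which drives the error between currently detected image features and their template positions toward zero. A minimal sketch of one IBVS iteration, assuming the classic normalized point-feature interaction matrix (Chaumette–Hutchinson formulation); the feature coordinates, depths, and gain below are illustrative placeholders, not values from the paper:

```python
# Minimal IBVS update sketch (classic point-feature formulation).
# All numeric values here are illustrative assumptions.
import numpy as np

def interaction_matrix(features, depths):
    """Stack the 2x6 interaction matrix rows for each normalized image
    point (x, y) at depth Z, relating feature velocity to the 6-DoF
    probe/camera velocity twist."""
    rows = []
    for (x, y), z in zip(features, depths):
        rows.append([-1 / z, 0, x / z, x * y, -(1 + x ** 2), y])
        rows.append([0, -1 / z, y / z, 1 + y ** 2, -x * y, -x])
    return np.array(rows)

def ibvs_step(current, template, depths, gain=0.5):
    """One IBVS iteration: compute the feature error (current - template)
    and return the commanded 6-DoF velocity twist that reduces it."""
    error = (np.asarray(current) - np.asarray(template)).reshape(-1)
    L = interaction_matrix(current, depths)
    # The Moore-Penrose pseudo-inverse handles redundant or deficient
    # feature sets (more or fewer than three points).
    return -gain * np.linalg.pinv(L) @ error

# Example: two landmarks slightly offset from their template positions.
v = ibvs_step(current=[(0.1, 0.0), (-0.1, 0.05)],
              template=[(0.0, 0.0), (-0.1, 0.0)],
              depths=[0.05, 0.05])
print(v)  # 6-DoF twist (vx, vy, vz, wx, wy, wz)
```

In the paper's setting the point features would be replaced by detected anatomical landmarks, and the resulting twist would be combined with the A-SEE orientation control that keeps the probe normal to the body surface.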
Source journal: IEEE Transactions on Automation Science and Engineering (Engineering & Technology — Automation & Control Systems)
CiteScore: 12.50 · Self-citation rate: 14.30% · Annual articles: 404 · Review time: 3.0 months
About the journal: The IEEE Transactions on Automation Science and Engineering (T-ASE) publishes fundamental papers on Automation, emphasizing scientific results that advance efficiency, quality, productivity, and reliability. T-ASE encourages interdisciplinary approaches from computer science, control systems, electrical engineering, mathematics, mechanical engineering, operations research, and other fields. T-ASE welcomes results relevant to industries such as agriculture, biotechnology, healthcare, home automation, maintenance, manufacturing, pharmaceuticals, retail, security, service, supply chains, and transportation. T-ASE addresses a research community willing to integrate knowledge across disciplines and industries. For this purpose, each paper includes a Note to Practitioners that summarizes how its results can be applied or how they might be extended to apply in practice.
Latest articles in this journal:
- Study on Optimization of Automatic Kidney Suture Strategies Based on Wound Conditions and Suture Paths
- Zero-Shot Sim-to-Real sEMG-Based Control of Elbow Exoskeletons Using Deep Reinforcement Learning
- Unbalanced and Balanced Competition Strategy-Assisted Dual-Swarm Optimizer for Constrained Multi-Objective Optimization
- Multi-Scale Convolutional Attention Model for GNSS Jamming Recognition
- Game Theory-Based Resilient Control of Time-Varying Cyber-Physical Systems Under Hybrid Attacks