
Latest articles in IEEE Transactions on Medical Robotics and Bionics

Robotic Bronchoscopy System With Variable-Stiffness Catheter for Pulmonary Lesion Biopsy
IF 3.4 Q2 ENGINEERING, BIOMEDICAL Pub Date: 2025-01-09 DOI: 10.1109/TMRB.2025.3527655
Xing-Yu Chen;Wenjie Lai;Xiaohui Xiong;Xuemiao Wang;Shi-Mei Wang;Peng Li;Weiyi Han;Yangyang Du;Wenke Duan;Wenjing Du;Soo Jay Phee;Lei Wang
Bronchoscopy is a minimally invasive and effective method for early lung cancer diagnosis. Traditional bronchoscopy faces challenges such as limited dexterity, operator fatigue, and difficulty in maintaining steady manipulation, while existing robot-assisted methods suffer from deficiencies such as tool instability in the dynamic respiratory environment. This paper presents a teleoperated robotic bronchoscopy system featuring a controllable variable-stiffness catheter that enhances stability and flexibility during transbronchial biopsies. The 7-DoF robotic system allows for translation, rotation, and bending of the bronchoscope; delivery and bending of the catheter; delivery and control of biopsy tools; and stiffness adjustment of the catheter, which adapts to the dynamic pulmonary environment to provide stable support during tissue sampling. Key contributions include the robotic platform integrated with the variable-stiffness catheter and the implementation of a novel three-stage procedure for tissue sampling. The robotic system has been thoroughly evaluated through a series of tests covering system accuracy, characterization of the variable-stiffness catheter’s flexibility, force exertion, safety during operation, temperature control, and an in-vivo experiment. The results demonstrated the system’s feasibility and effectiveness, with metrics such as safe force limits, system flexibility, and positioning accuracy showing its potential to improve the accuracy and safety of traditional bronchoscopy procedures.
IEEE Transactions on Medical Robotics and Bionics, vol. 7, no. 1, pp. 416-427. Citations: 0
A High-Fidelity Simulation Framework for Grasping Stability Analysis in Human Casualty Manipulation
IF 3.4 Q2 ENGINEERING, BIOMEDICAL Pub Date: 2025-01-09 DOI: 10.1109/TMRB.2025.3527687
Qianwen Zhao;Rajarshi Roy;Chad Spurlock;Kevin Lister;Long Wang
Recently, there has been a growing interest in rescue robots due to their vital role in addressing emergency scenarios and providing crucial assistance in challenging or hazardous situations where human intervention is problematic. However, very few of these robots are capable of actively engaging with humans and undertaking physical manipulation tasks. This limitation is largely attributed to the absence of tools that can realistically simulate physical interactions, especially the contact mechanisms between a robotic gripper and a human body. In this study, we aim to address key limitations in current developments towards robotic casualty manipulation. Firstly, we present an integrative simulation framework for casualty manipulation. We adapt a finite element method (FEM) tool into the grasping and manipulation scenario, and the developed framework can provide accurate biomechanical reactions resulting from manipulation. Secondly, we conduct a detailed assessment of grasping stability during casualty grasping and manipulation simulations. To validate the necessity and superior performance of the proposed high-fidelity simulation framework, we conducted a qualitative and quantitative comparison of grasping stability analyses between the proposed framework and the state-of-the-art multi-body physics simulations. Through these efforts, we have taken the first step towards a feasible solution for robotic casualty manipulation.
IEEE Transactions on Medical Robotics and Bionics, vol. 7, no. 1, pp. 281-289. Citations: 0
A Novel Deployable and Stiffness-Variable Homecare Hyper-Redundant Robot Based on the Origami Mechanism
IF 3.4 Q2 ENGINEERING, BIOMEDICAL Pub Date: 2025-01-09 DOI: 10.1109/TMRB.2025.3527713
Zhenhua Gong;Guangpu Zhu;Ting Zhang
The advantages of hyper-redundant robots lie in their natural flexibility, large deformation, and passive adaptability, which show great potential in medical and nursing applications. However, these same features leave them weak in extensibility and load capacity, making it difficult to complete fine care operations and daily grasping tasks. In this paper, a variable-stiffness hyper-redundant robot based on the origami principle is proposed; it achieves a large deploy/fold ratio and realizes large stiffness changes based on the bionic muscle-driven variable-stiffness principle. Guided by origami theory, the robot uses rigid origami mechanisms as the skeleton support and flexible gasbags as the backbones, and hybrid actuation realizes extension, contraction, variable stiffness, and omnidirectional bending motion. Based on the motion/stiffness model of the hyper-redundant robot, the characteristics of the single-joint and the 6-joint hyper-redundant robot are verified by experiments. These experiments confirm that the hyper-redundant robot has a large deploy/fold and variable-stiffness range, obtains a large bending deformation and working range, can overcome the gravity generated by its own weight and the load, and has a high load capacity.
IEEE Transactions on Medical Robotics and Bionics, vol. 7, no. 1, pp. 66-76. Citations: 0
Surface Electromyography-Based Speech Detection Amid False Triggers for Artificial Voice Systems in Laryngectomy Patients
IF 3.4 Q2 ENGINEERING, BIOMEDICAL Pub Date: 2025-01-09 DOI: 10.1109/TMRB.2025.3527685
Nevena Musikic;Douglas B. Chepeha;Milos R. Popovic
Laryngectomy, a surgical intervention for laryngeal cancer, effectively treats the condition but results in the loss of natural speech. Voice restoration post-laryngectomy typically involves manual control, limiting patients’ ability to multitask while speaking. Surface electromyography (sEMG) offers a hands-free alternative for controlling artificial voice systems. However, daily orofacial activities such as chewing or coughing activate the same muscles used for sEMG control, potentially causing false triggers. To address this, we perform a detailed analysis of facial and neck muscles during speech and non-speech activities to identify potential false triggers for sEMG-controlled artificial voice systems. We propose a five-step algorithm to prepare noisy sEMG data for analysis and to detect accurate speech onset and termination times within the muscle activity. A two-stage classification approach is proposed to distinguish speech from non-speech activities: the first-stage classifier detects the presence of any activity versus non-activity with an F1-score of 95.8%, while the second-stage classifier recognizes speech among other activities with an F1-score of 96.3%. This research marks a significant advancement in differentiating speech from other daily activities, thereby minimizing false triggers in sEMG-controlled artificial voice systems.
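The two-stage idea above can be sketched with a toy NumPy example. This is not the authors' classifier (which is trained on real sEMG features); the envelope threshold for stage 1 and the duty-cycle rule for stage 2 (speech involves sustained activation, while a cough is a brief burst) are illustrative assumptions.

```python
import numpy as np

# Stand-in two-stage detection on synthetic sEMG traces.

def rms_envelope(emg, win=50):
    """Moving RMS envelope of a raw sEMG trace."""
    padded = np.pad(np.asarray(emg, float) ** 2,
                    (win // 2, win - win // 2 - 1), mode="edge")
    return np.sqrt(np.convolve(padded, np.ones(win) / win, mode="valid"))

def stage1_activity(envelope, threshold=0.1):
    """Stage 1: flag any muscle activity versus rest."""
    return envelope > threshold

def stage2_speech(active_mask, duty_threshold=0.3):
    """Stage 2: call 'speech' when activation is sustained rather than brief."""
    return active_mask.mean() > duty_threshold

rng = np.random.default_rng(0)
speech = np.concatenate([rng.normal(0, 0.02, 500),
                         rng.normal(0, 0.5, 1000),   # sustained activation
                         rng.normal(0, 0.02, 500)])
cough = rng.normal(0, 0.02, 2000)
cough[300:330] += 4.0                                 # two brief bursts
cough[1200:1230] -= 4.0

speech_flag = stage2_speech(stage1_activity(rms_envelope(speech)))
cough_active = stage1_activity(rms_envelope(cough))
cough_flag = stage2_speech(cough_active)
```

The point of the two stages mirrors the abstract: the cough is detected as activity by stage 1 but rejected as non-speech by stage 2.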
IEEE Transactions on Medical Robotics and Bionics, vol. 7, no. 1, pp. 404-415. Citations: 0
Ultrasound-Based Human Machine Interfaces for Hand Gesture Recognition: A Scoping Review and Future Direction
IF 3.4 Q2 ENGINEERING, BIOMEDICAL Pub Date: 2024-12-25 DOI: 10.1109/TMRB.2024.3522502
Keshi He
Since an ultrasound signal was first used to build a human-machine interface (HMI) for prosthetic control in 2006, ultrasound-based HMIs have received great attention over the past 18 years. In this paper, I provide a comprehensive overview of every aspect of an ultrasound-based HMI for hand gesture recognition (HGR). First, I introduce the principle of ultrasound-based HGR, outline a workflow for an ultrasound-based HMI for HGR, and detail each step in that workflow, followed by an introduction to performance evaluation and robustness for this type of HMI. I then review the research progress of ultrasound-based HMIs for HGR. After that, I introduce the state-of-the-art wearable ultrasound systems for HMIs. Furthermore, I summarize miscellaneous applications of ultrasound-based HMIs for HGR. Finally, I discuss the main research challenges and envision future research directions for ultrasound-based HMIs for HGR.
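The acquisition-to-classification workflow that such reviews describe can be sketched end to end. Everything below is synthetic: the simulated A-mode echo frames, the depth-window intensity features, and the nearest-centroid rule are stand-ins for real transducer data and the trained models surveyed in the paper.

```python
import numpy as np

def extract_features(frame, n_windows=8):
    """Mean echo intensity in fixed depth windows (a common A-mode feature)."""
    return np.array([w.mean() for w in np.array_split(np.abs(frame), n_windows)])

class NearestCentroid:
    """Classify a feature vector by its nearest gesture template."""
    def fit(self, X, y):
        self.labels_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.labels_])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.labels_[d.argmin(axis=1)]

rng = np.random.default_rng(1)

def simulate_frame(gesture):
    """Toy A-mode frame: two tissue interfaces whose echo strength
    changes with the gesture, plus measurement noise."""
    frame = rng.normal(0, 0.05, 400)
    frame[50:100] += 1.0 if gesture == 0 else 0.2
    frame[200:260] += 0.3 if gesture == 0 else 1.0
    return frame

y_train = np.array([0] * 20 + [1] * 20)
X_train = np.array([extract_features(simulate_frame(g)) for g in y_train])
clf = NearestCentroid().fit(X_train, y_train)

y_test = np.array([0, 1] * 10)
X_test = np.array([extract_features(simulate_frame(g)) for g in y_test])
accuracy = (clf.predict(X_test) == y_test).mean()
```

On such cleanly separated synthetic echoes the accuracy is near-perfect; the review's point is precisely that real muscle deformation signals are far noisier and motivate the richer models it surveys.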
IEEE Transactions on Medical Robotics and Bionics, vol. 7, no. 1, pp. 200-212. Citations: 0
Development of a Reconfigurable 7-DOF Upper Limb Rehabilitation Exoskeleton With Gravity Compensation Based on DMP
IF 3.4 Q2 ENGINEERING, BIOMEDICAL Pub Date: 2024-12-23 DOI: 10.1109/TMRB.2024.3517157
Qingcong Wu;Linliang Zheng;Yanghui Zhu;Zihan Xu;Qiang Zhang;Hongtao Wu
The aging population and the number of stroke patients are increasing year by year. Rehabilitation exoskeletons can help patients carry out rehabilitation training and improve their activities of daily living. First, this paper designs a reconfigurable exoskeleton for upper limb rehabilitation. Second, the workspace and singular configurations of the exoskeleton are analyzed. Then, Dynamic Movement Primitives (DMP) and sliding mode control are combined to form a new control strategy. Additionally, control experiments on the exoskeleton are carried out under different working modes of the gravity compensation device and different control methods. The advantages of sliding mode control under a combinational reaching law (CRL-SMC) are verified, and the influence of the gravity compensation device on motor driving torque and energy consumption is analyzed. Finally, experimental results show that, compared with sliding mode control under a power reaching law (PRL-SMC) and PID control, CRL-SMC has better control performance in single-joint trajectory tracking and end trajectory tracking, improving control performance by at least 60%. In the best case, the gravity compensation device reduces the energy consumption of the driving element by 81.90% and the maximum motor current by 69.25%.
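A DMP of the kind combined with sliding mode control above can be sketched minimally. This single-DoF example fixes the time constant to 1 and tabulates the forcing term directly from a demonstration instead of fitting it with basis functions over a phase variable, so it illustrates only the reproduction property of DMPs, not the authors' controller.

```python
import numpy as np

# Minimal discrete DMP (time constant 1): v' = alpha*(beta*(g - x) - v) + f(t),
# x' = v. The forcing term f is tabulated from a demonstration so that
# replaying it reproduces the demo; full DMPs fit f over a phase variable,
# which is what lets them generalize to new goals and durations.

alpha, beta = 25.0, 6.25          # critically damped: beta = alpha / 4
dt = 0.001
t = np.arange(0.0, 1.0, dt)
x_demo = 0.5 * np.sin(np.pi * t) + t   # demonstrated joint trajectory
g = x_demo[-1]                          # goal position

v_demo = np.gradient(x_demo, dt)
a_demo = np.gradient(v_demo, dt)
f_target = a_demo - alpha * (beta * (g - x_demo) - v_demo)

# Replay: integrate forward (Euler) with the learned forcing term.
x, v = x_demo[0], v_demo[0]
x_repro = np.empty_like(x_demo)
for i in range(t.size):
    x_repro[i] = x
    a = alpha * (beta * (g - x) - v) + f_target[i]
    x += v * dt
    v += a * dt

reproduction_error = np.abs(x_repro - x_demo).max()
```

The spring-damper term pulls the state toward the goal, so small integration errors in the tabulated forcing term do not accumulate, and the replay tracks the demonstration closely.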
IEEE Transactions on Medical Robotics and Bionics, vol. 7, no. 1, pp. 303-314. Citations: 0
Direct Camera-Only Bundle Adjustment for 3-D Textured Colon Surface Reconstruction Based on Pre-Operative Model
IF 3.4 Q2 ENGINEERING, BIOMEDICAL Pub Date: 2024-12-13 DOI: 10.1109/TMRB.2024.3517168
Shuai Zhang;Liang Zhao;Shoudong Huang;Evangelos B. Mazomenos;Danail Stoyanov
This paper addresses the problem of reconstructing textured colon surface maps using a sequence of monocular colonoscopic images together with a 3D colon mesh model segmented from CT colonography. The problem is formulated as a direct bundle adjustment (BA) that simultaneously optimizes all camera poses and the intensities of the vertices on the pre-operative mesh model. The optimization maximizes photometric consistency between multiple 2D image views and the pre-operative 3D mesh model. The key properties of the proposed direct BA formulation are that it requires no reference image, no data association (feature extraction and matching), and no image depth information, making it particularly suitable for scenarios where distinct features and depth are unavailable, such as 2D colonoscopic images. Furthermore, we prove that, when the proposed direct BA is solved with the Gauss-Newton (GN) algorithm, optimizing the camera poses alone is equivalent to jointly optimizing the camera poses and the intensities of the 3D mesh vertices. A direct camera-only BA algorithm is therefore proposed and used for 3D textured colon reconstruction from textureless 2D colonoscopic images. Validations on simulation, phantom, and in-vivo datasets demonstrate the accuracy and feasibility of the proposed algorithm.
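The core idea of direct photometric optimization can be illustrated in one dimension. The sketch below is an assumption-laden toy, not the paper's six-DoF pose formulation: a synthetic `intensity` profile stands in for the rendered model, and a single shift parameter `t` stands in for a camera pose, estimated by Gauss-Newton on photometric residuals with no feature matching.

```python
import numpy as np

def intensity(x):                      # smooth synthetic intensity profile
    return np.sin(x) + 0.5 * np.cos(2.0 * x)

def intensity_grad(x):                 # analytic image gradient dI/dx
    return np.cos(x) - np.sin(2.0 * x)

x = np.linspace(0.0, 2.0 * np.pi, 200)
t_true = 0.3
target = intensity(x + t_true)         # "observed" image

t = 0.0                                 # initial guess for the warp parameter
for _ in range(20):
    r = intensity(x + t) - target       # photometric residuals, one per pixel
    J = intensity_grad(x + t)           # d(residual)/dt via the image gradient
    t -= (J @ r) / (J @ J)              # Gauss-Newton update (scalar case)
```

As in the paper's formulation, the image gradient plays the role of the Jacobian, and the photometric residuals drive the update; no correspondences are ever extracted.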
IEEE Transactions on Medical Robotics and Bionics, vol. 7, no. 1, pp. 242-253. Citations: 0
Human Control of Underactuated Objects: Adaptation to Uncertain Nonlinear Dynamics Ensures Stability
IF 3.4 Q2 ENGINEERING, BIOMEDICAL Pub Date: 2024-12-13 DOI: 10.1109/TMRB.2024.3517172
Rakshith Lokesh;Dagmar Sternad
Humans frequently interact with objects that have dynamic complexity, like a cup of coffee. Such systems are nonlinear and underactuated, potentially creating unstable dynamics. Instabilities generate complex interaction forces that render the system unpredictable; and yet, humans interact with these objects with ease. Nonlinear dynamic analysis shows that the initial conditions and the frequencies of the input forces determine the system’s stability. Taking inspiration from carrying a cup of coffee, participants rhythmically moved a cup with a ball rolling inside, modeled as a cart-pendulum system. They were encouraged to prepare the cup-and-ball system by ‘jiggling’ the cup before moving it back and forth along a horizontal line. We tested the hypothesis that humans initialize the system and choose interaction frequencies that stabilize their interactions. To create uncertainty about the specific cup-and-ball system, the pendulum length was varied without providing cues to the participant. Stability was quantified by the variability of the relative phase between cup and ball. Results showed that participants nonlinearly co-varied the initial ball angle at the end of preparation and the cup frequency during the rhythmic phase. Mapping participants’ choices onto the highly nonlinear manifold of stable solutions generated by forward simulations verified that they indeed achieved stable solutions.
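The relative-phase variability measure used to quantify stability can be computed as follows. The phase estimator below (from a signal and its derivative) is exact only for near-sinusoidal motion, and the signals are synthetic; the study's exact estimator may differ in detail.

```python
import numpy as np

def instantaneous_phase(x, dt, omega):
    """Phase of a near-sinusoidal signal from the signal and its derivative."""
    dx = np.gradient(x, dt)
    return np.arctan2(-dx / omega, x)

def relative_phase_variability(x1, x2, dt, omega):
    dphi = (instantaneous_phase(x1, dt, omega)
            - instantaneous_phase(x2, dt, omega))
    # Circular variability: 1 - |mean resultant vector|; 0 = phase-locked.
    return 1.0 - np.abs(np.mean(np.exp(1j * dphi)))

dt = 0.001
t = np.arange(0.0, 10.0, dt)
omega = 2.0 * np.pi                          # 1 Hz cup oscillation
cup = np.cos(omega * t)
ball_locked = np.cos(omega * t - 0.8)        # constant lag: stable coupling
ball_drifting = np.cos(1.13 * omega * t)     # drifting phase: unstable coupling

v_locked = relative_phase_variability(cup, ball_locked, dt, omega)
v_drifting = relative_phase_variability(cup, ball_drifting, dt, omega)
```

A constant cup-ball lag yields variability near zero, while a drifting relative phase drives the measure toward one, matching the paper's use of low variability as the signature of a stable interaction.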
IEEE Transactions on Medical Robotics and Bionics, vol. 7, no. 1, pp. 6-12. Citations: 0
Desmoking of the Endoscopic Surgery Images Based on a Local-Global U-Shaped Transformer Model
IF 3.4 Q2 ENGINEERING, BIOMEDICAL Pub Date : 2024-12-13 DOI: 10.1109/TMRB.2024.3517139
Wanqing Wang;Fucheng Liu;Jianxiong Hao;Xiangyang Yu;Bo Zhang;Chaoyang Shi
In robot-assisted minimally invasive surgery (RMIS), the smoke generated by energy-based surgical instruments blurs and obstructs the endoscopic surgical field, which increases the difficulty and risk of robotic surgery. However, current desmoking research primarily focuses on natural weather conditions, with limited studies addressing desmoking techniques for endoscopic images. Furthermore, surgical smoke presents a notably intricate morphology, and research efforts aimed at uniform, non-uniform, thin, and dense smoke remain relatively limited. This work proposes a Local-Global U-Shaped Transformer Model (LGUformer) based on the U-Net and Transformer architectures to remove complex smoke from endoscopic images. By introducing a local-global multi-head self-attention mechanism and multi-scale depthwise convolution, the proposed model enhances the inference capability. An enhanced feature map fusion method improves the quality of reconstructed images. The improved modules enable efficient handling of variable smoke while generating superior-quality images. Through desmoking experiments on synthetic and real smoke images, the LGUformer model demonstrated superior performance compared with seven other desmoking models in terms of accuracy, clarity, absence of distortion, and robustness. A task-based surgical instrument segmentation experiment indicated the potential of this model as a pre-processing step in visual tasks. Finally, an ablation study was conducted to verify the advantages of the proposed modules.
{"title":"Desmoking of the Endoscopic Surgery Images Based on a Local-Global U-Shaped Transformer Model","authors":"Wanqing Wang;Fucheng Liu;Jianxiong Hao;Xiangyang Yu;Bo Zhang;Chaoyang Shi","doi":"10.1109/TMRB.2024.3517139","DOIUrl":"https://doi.org/10.1109/TMRB.2024.3517139","url":null,"abstract":"In robot-assisted minimally invasive surgery (RMIS), the smoke generated by energy-based surgical instruments blurs and obstructs the endoscopic surgical field, which increases the difficulty and risk of robotic surgery. However, current desmoking research primarily focuses on natural weather conditions, with limited studies addressing desmoking techniques for endoscopic images. Furthermore, surgical smoke presents a notably intricate morphology, and research efforts aimed at uniform, non-uniform, thin, and dense smoke remain relatively limited. This work proposes a Local-Global U-Shaped Transformer Model (LGUformer) based on the U-Net and Transformer architectures to remove complex smoke from endoscopic images. By introducing a local-global multi-head self-attention mechanism and multi-scale depthwise convolution, the proposed model enhances the inference capability. An enhanced feature map fusion method improves the quality of reconstructed images. The improved modules enable efficient handling of variable smoke while generating superior-quality images. Through desmoking experiments on synthetic and real smoke images, the LGUformer model demonstrated superior performance compared with seven other desmoking models in terms of accuracy, clarity, absence of distortion, and robustness. A task-based surgical instrument segmentation experiment indicated the potential of this model as a pre-processing step in visual tasks. 
Finally, an ablation study was conducted to verify the advantages of the proposed modules.","PeriodicalId":73318,"journal":{"name":"IEEE transactions on medical robotics and bionics","volume":"7 1","pages":"254-265"},"PeriodicalIF":3.4,"publicationDate":"2024-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143521564","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
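The "local-global multi-head self-attention mechanism" named in the abstract above combines attention restricted to local windows with attention over the whole sequence. The following single-head NumPy sketch illustrates that idea only; the non-overlapping windows and the averaged fusion of the two branches are simplifying assumptions, not the LGUformer's actual design:

```python
import numpy as np

def attention(q, k, v):
    """Scaled dot-product attention over the sequence axis."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

def local_global_attention(x, window=4):
    """x: (n, d) token features; combines windowed and full-sequence attention."""
    n, _ = x.shape
    assert n % window == 0, "sequence length must divide into windows"
    # Local branch: self-attention within non-overlapping windows.
    local = np.vstack([attention(x[i:i + window], x[i:i + window], x[i:i + window])
                       for i in range(0, n, window)])
    # Global branch: every token attends to the full sequence.
    glob = attention(x, x, x)
    return 0.5 * (local + glob)  # simple averaged fusion of both branches

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
y = local_global_attention(x)
```

The local branch captures fine-grained smoke texture in a neighborhood while the global branch provides scene-wide context; a multi-head version would repeat this over several learned projections of `x`.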
Adaptive Graph Learning From Spatial Information for Surgical Workflow Anticipation
IF 3.4 Q2 ENGINEERING, BIOMEDICAL Pub Date : 2024-12-13 DOI: 10.1109/TMRB.2024.3517137
Francis Xiatian Zhang;Jingjing Deng;Robert Lieck;Hubert P. H. Shum
Surgical workflow anticipation is the task of predicting the timing of relevant surgical events from live video data, which is critical in Robotic-Assisted Surgery (RAS). Accurate predictions require the use of spatial information to model surgical interactions. However, current methods focus solely on surgical instruments, assume static interactions between instruments, and only anticipate surgical events within a fixed time horizon. To address these challenges, we propose an adaptive graph learning framework for surgical workflow anticipation based on a novel spatial representation, featuring three key innovations. First, we introduce a new representation of spatial information based on bounding boxes of surgical instruments and targets, including their detection confidence levels. These are trained on additional annotations we provide for two benchmark datasets. Second, we design an adaptive graph learning method to capture dynamic interactions. Third, we develop a multi-horizon objective that balances learning objectives for different time horizons, allowing for unconstrained predictions. Evaluations on two benchmarks reveal superior performance in short-to-mid-term anticipation, with an error reduction of approximately 3% for surgical phase anticipation and 9% for remaining surgical duration anticipation. These performance improvements demonstrate the effectiveness of our method and highlight its potential for enhancing preparation and coordination within the RAS team. This can improve surgical safety and the efficiency of operating room usage.
{"title":"Adaptive Graph Learning From Spatial Information for Surgical Workflow Anticipation","authors":"Francis Xiatian Zhang;Jingjing Deng;Robert Lieck;Hubert P. H. Shum","doi":"10.1109/TMRB.2024.3517137","DOIUrl":"https://doi.org/10.1109/TMRB.2024.3517137","url":null,"abstract":"Surgical workflow anticipation is the task of predicting the timing of relevant surgical events from live video data, which is critical in Robotic-Assisted Surgery (RAS). Accurate predictions require the use of spatial information to model surgical interactions. However, current methods focus solely on surgical instruments, assume static interactions between instruments, and only anticipate surgical events within a fixed time horizon. To address these challenges, we propose an adaptive graph learning framework for surgical workflow anticipation based on a novel spatial representation, featuring three key innovations. First, we introduce a new representation of spatial information based on bounding boxes of surgical instruments and targets, including their detection confidence levels. These are trained on additional annotations we provide for two benchmark datasets. Second, we design an adaptive graph learning method to capture dynamic interactions. Third, we develop a multi-horizon objective that balances learning objectives for different time horizons, allowing for unconstrained predictions. Evaluations on two benchmarks reveal superior performance in short-to-mid-term anticipation, with an error reduction of approximately 3% for surgical phase anticipation and 9% for remaining surgical duration anticipation. These performance improvements demonstrate the effectiveness of our method and highlight its potential for enhancing preparation and coordination within the RAS team. 
This can improve surgical safety and the efficiency of operating room usage.","PeriodicalId":73318,"journal":{"name":"IEEE transactions on medical robotics and bionics","volume":"7 1","pages":"266-280"},"PeriodicalIF":3.4,"publicationDate":"2024-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143521336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
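The spatial representation in the abstract above — bounding boxes of instruments and targets together with detection confidences — can feed a graph whose edges encode pairwise interaction strength. A hypothetical NumPy sketch is shown below; the Gaussian distance kernel, the `sigma` scale, and the confidence-product weighting are illustrative assumptions, not the paper's adaptive graph learning method:

```python
import numpy as np

def boxes_to_adjacency(boxes, conf, sigma=0.25):
    """boxes: (n, 4) arrays of [x1, y1, x2, y2]; conf: (n,) scores in [0, 1].

    Edge weights decay with the distance between box centers and are
    down-weighted by low detection confidence.
    """
    centers = np.stack([(boxes[:, 0] + boxes[:, 2]) / 2,
                        (boxes[:, 1] + boxes[:, 3]) / 2], axis=1)
    d = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
    adj = np.exp(-(d / sigma) ** 2) * np.outer(conf, conf)
    np.fill_diagonal(adj, 0.0)  # no self-loops
    return adj

boxes = np.array([[0.0, 0.0, 0.2, 0.2],   # instrument 1
                  [0.1, 0.1, 0.3, 0.3],   # instrument 2 (close to 1)
                  [0.8, 0.8, 1.0, 1.0]])  # target (far away)
conf = np.array([0.9, 0.8, 0.95])
A = boxes_to_adjacency(boxes, conf)
```

Nearby, confidently detected objects get strong edges while distant or uncertain detections are suppressed; an adaptive variant would learn to reweight these edges per frame rather than fixing the kernel.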