cHand: Open Source Hand Posture Visualization in CHAI3D
Pub Date: 2023-04-19 | DOI: 10.1109/ISMR57123.2023.10130196
Edoardo Battaglia, A. M. Fey
Visualization of hand movement is an important part of many medical training, VR, and haptic experiences, a need that researchers typically address by developing application-specific hand visualization tools. While some existing simulators allow for hand kinematic visualization using a generic hand model, they are usually targeted at robotic grasp planning rather than designed specifically for rendering in applications that include haptic experiences. To fill this gap, in this paper we present cHand, an extension of the haptics software library CHAI3D that enables kinematic visualization of an arbitrary hand model. A representation of the hand can be achieved with the elementary geometric shapes provided by CHAI3D, or with custom geometries loaded from STL files. We release cHand as an open-source contribution, in keeping with the open-source nature of CHAI3D, and present a tutorial on its use in this manuscript.
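As an illustration of the primitive-based rendering described above, the following is a minimal sketch (not the cHand API itself, which the paper documents in its tutorial) of assembling one finger from CHAI3D shapes, with spheres at the joints and lines for the phalanges; the joint positions and the STL file name are assumptions.

```cpp
// Minimal sketch, assuming joint positions come from an external kinematic
// model or hand tracker. This is NOT the cHand interface, only an example of
// the CHAI3D primitives it could build on.
#include "chai3d.h"
#include <vector>

using namespace chai3d;

int main()
{
    cWorld* world = new cWorld();

    // Hypothetical joint positions (meters) for one finger: MCP, PIP, DIP, tip.
    std::vector<cVector3d> joints = {
        cVector3d(0.000, 0.0, 0.000),
        cVector3d(0.040, 0.0, 0.010),
        cVector3d(0.065, 0.0, 0.015),
        cVector3d(0.085, 0.0, 0.017)
    };

    // Spheres mark the joints; lines approximate the phalanges.
    for (size_t i = 0; i < joints.size(); ++i) {
        cShapeSphere* joint = new cShapeSphere(0.006);
        joint->setLocalPos(joints[i]);
        world->addChild(joint);

        if (i + 1 < joints.size()) {
            cShapeLine* phalanx = new cShapeLine(joints[i], joints[i + 1]);
            world->addChild(phalanx);
        }
    }

    // Alternatively, a custom geometry can be loaded from an STL file
    // ("phalanx.stl" is a hypothetical name) and placed with the same
    // transform interface.
    cMultiMesh* mesh = new cMultiMesh();
    if (mesh->loadFromFile("phalanx.stl")) {
        mesh->setLocalPos(joints[0]);
        world->addChild(mesh);
    }

    // Camera, lighting, and the haptic/graphics loop are omitted here.
    return 0;
}
```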
{"title":"cHand: Open Source Hand Posture Visualization in CHAI3D","authors":"Edoardo Battaglia, A. M. Fey","doi":"10.1109/ISMR57123.2023.10130196","DOIUrl":"https://doi.org/10.1109/ISMR57123.2023.10130196","url":null,"abstract":"Visualization of hand movement is an important part of many medical training, VR and haptic experiences, which researchers typically address by developing application specific hand visualization tools. While some existing simulators allow for hand kinematic visualization using a generic hand model, they are usual targeted for robotic grasp planning rather than designed specifically for rendering in applications that include haptic experiences. To fill this gap, in this paper we present cHand, an extension of the haptics software library CHAI3D, that enables hand kinematic visualization of an arbitrary hand model. A representation of the hand can be achieved with elementary geometric shapes that are provided by CHAI3D, or with custom geometries loaded from STL files. We release cHand as an open source contribution to keep with the open source nature of CHAI3D, and present a tutorial on its use in this manuscript.","PeriodicalId":276757,"journal":{"name":"2023 International Symposium on Medical Robotics (ISMR)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115304504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using a Force-Controlled Robot for Probing-Based Registration and Automated Bone Drilling in Pedicle Screw Placement Procedures
Pub Date: 2023-04-19 | DOI: 10.1109/ISMR57123.2023.10130247
Saman Vafadar, Elie Saghbiny, Antoine Harlé, G. Morel
Pedicle screw placement is a crucial phase in various spine surgical procedures. In recent years, robotic solutions have been proposed to assist it. They require intra-operative registration based on CT or fluoroscopic imaging, raising risks for patients and surgeons. In this study, we investigated registration methods that remove the need for intraoperative imaging. They involve a robot holding a mechanical probe whose tip contacts the bone at sparse locations, either under the surgeon's manual guidance or through automatic force-control-based probing. Further, once the anatomy is registered, we automate the entire process, including pedicle preparation and drilling, with the same force-controlled robot. Ten drillings were performed in five lumbar vertebrae of a porcine sample using a custom-designed instrument mounted on the robot's end-effector. Preoperative and postoperative scans were performed to evaluate the registrations and drillings quantitatively. The mean difference between the planned and postop-measured drilling orientations was $2.2^{\circ}$ (max. $4.4^{\circ}$). The mean distance between the planned entry points and the postop-measured drilling paths was 2.2 mm (max. 4.1 mm). These results open perspectives for X-ray-free robotic operation and automated pedicle screw placement.
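The abstract does not spell out the registration math; as a hedged sketch, the rigid alignment at the core of such a probing-based registration can be written as a Kabsch/SVD fit between the probed tip positions and corresponding points on the preoperative CT model. The actual pipeline would also need correspondence search or point-to-surface refinement, which are omitted here.

```cpp
// Sketch only: SVD-based rigid fit between probed bone points and matching
// model points from the preoperative CT. Correspondences are assumed known.
#include <Eigen/Dense>

Eigen::Matrix4d rigidRegistration(const Eigen::Matrix3Xd& probed,
                                  const Eigen::Matrix3Xd& model)
{
    // Center both point sets.
    Eigen::Vector3d cp = probed.rowwise().mean();
    Eigen::Vector3d cm = model.rowwise().mean();
    Eigen::Matrix3Xd P = probed.colwise() - cp;
    Eigen::Matrix3Xd M = model.colwise() - cm;

    // Cross-covariance and its SVD give the optimal rotation (Kabsch).
    Eigen::Matrix3d H = P * M.transpose();
    Eigen::JacobiSVD<Eigen::Matrix3d> svd(H, Eigen::ComputeFullU | Eigen::ComputeFullV);
    Eigen::Matrix3d R = svd.matrixV() * svd.matrixU().transpose();
    if (R.determinant() < 0.0) {                 // correct a possible reflection
        Eigen::Matrix3d V = svd.matrixV();
        V.col(2) *= -1.0;
        R = V * svd.matrixU().transpose();
    }
    Eigen::Vector3d t = cm - R * cp;

    Eigen::Matrix4d T = Eigen::Matrix4d::Identity();
    T.topLeftCorner<3, 3>() = R;
    T.topRightCorner<3, 1>() = t;
    return T;                                    // maps probed points into the CT frame
}
```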
{"title":"Using a Force-Controlled Robot for Probing-Based Registration and Automated Bone Drilling in Pedicle Screw Placement Procedures","authors":"Saman Vafadar, Elie Saghbiny, Antoine Harlé, G. Morel","doi":"10.1109/ISMR57123.2023.10130247","DOIUrl":"https://doi.org/10.1109/ISMR57123.2023.10130247","url":null,"abstract":"Pedicle screw placement is a crucial phase in various spine surgical procedures. In the recent years, robotic solutions have been proposed to assist it. They require intra-operative registration based on CT or fluoroscopic imaging, raising risks for patients and surgeons. In this study, we investigated registration methods that remove the need for intraoperative imaging. They involve a robot holding a mechanical probe whose tip contacts the bone at sparse locations. This involves either surgeon's manual guidance, or automatic force-control based probing. Further, once the anatomy is registered, we automate the entire process, including the pedicle preparation and drilling, with the same force controlled robot. Ten drillings were performed in five lumbar vertebrae of a porcine sample using a custom-designed instrument mounted on the robot's end-effector. Preoprative and postoperative scans were performed to evaluate the registrations and drillings quantitatively. The mean difference between the planned and postop-measured drilling orientations was $2.2^{circ} ({text{Max}}. 4.4^{circ})$. The mean distance between the planned entry points and the postop-measured drilling paths was $2.2 mm (text{Max}. 4.1 mm)$. These results open perspectives for X-ray free robotic operations and pedicle screw placement automation.","PeriodicalId":276757,"journal":{"name":"2023 International Symposium on Medical Robotics (ISMR)","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122399441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Trimodal Framework for Robot-Assisted Vascular Shunt Insertion When a Supervising Surgeon is Local, Remote, or Unavailable
Pub Date: 2023-04-19 | DOI: 10.1109/ISMR57123.2023.10130195
K. Dharmarajan, Will Panitch, Baiyu Shi, Huang Huang, Lawrence Yunliang Chen, Thomas Low, Danyal Fer, Ken Goldberg
Vascular shunt insertion is a common surgical procedure, performed by a surgeon-and-surgical-assistant team, to temporarily restore blood flow to damaged tissues. Robotic assistance for this procedure is challenging due to the required precision and to control uncertainty. The role of the robot in this task depends on the availability of a human surgeon. We propose a trimodal framework for vascular shunt insertion assisted by a dVRK robotic surgical assistant. We consider three scenarios: (1) a surgeon is available locally; (2) a remote surgeon is available via teleoperation; (3) no surgeon is available. In each scenario, the robot operates in a different mode, either through teleoperation or through automation. For mode (1), a learned visual servoing policy is proposed for vessel grasping. Physical experiments demonstrate a success rate of 70%-100% for mode (1), 100% for mode (2), and 80%-95% for mode (3).
{"title":"A Trimodal Framework for Robot-Assisted Vascular Shunt Insertion When a Supervising Surgeon is Local, Remote, or Unavailable","authors":"K. Dharmarajan, Will Panitch, Baiyu Shi, Huang Huang, Lawrence Yunliang Chen, Thomas Low, Danyal Fer, Ken Goldberg","doi":"10.1109/ISMR57123.2023.10130195","DOIUrl":"https://doi.org/10.1109/ISMR57123.2023.10130195","url":null,"abstract":"Vascular shunt insertion is a common surgical procedure requiring a surgeon-and-surgical-assistant team performed to temporarily restore blood flow to damaged tissues. Robotic assistance for this procedure is challenging due to precision and control uncertainty. The role of the robot in this task depends on the availability of a human surgeon. We propose a trimodal framework for vascular shunt insertion assisted by a dVRK robotic surgical assistant. We consider three scenarios: (1) a surgeon is available locally; (2) a remote surgeon is available via teleoperation; (3) no surgeon is available. In each scenario, the robot operates in a different mode either by teleoperation or automation. For mode (1), a learned visual servoing policy is proposed for vessel grasping. Physical experiments demonstrate a success rate of 70%-100% for mode (1), 100% for mode (2), and 80%-95% for mode (3).","PeriodicalId":276757,"journal":{"name":"2023 International Symposium on Medical Robotics (ISMR)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129439931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modeling Tendon-actuated Concentric Tube Robots
Pub Date: 2023-04-19 | DOI: 10.1109/ISMR57123.2023.10130176
Yash Chitalia, A. Donder, P. Dupont
Mechanics-based models have been developed to describe the shape of tendon-actuated continuum robots. Models have also been developed to describe the shape of concentric tube robots, i.e., nested combinations of precurved superelastic tubes. While an important class of continuum robots used in endoscopic and intracardiac medical applications combines these two designs, existing models do not cover this combination. Tendon-actuated models are limited to a single tube, while concentric tube models do not include tendon-produced forces and moments. This paper derives a mechanics-based model for this hybrid design and assesses it using numerical and physical experiments involving a pair of tendon-actuated tubes. It is demonstrated that, similar to concentric tube robots, relative twisting between the tendon-actuated tubes is an important factor in determining overall robot shape.
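For context (standard background, not a result of this paper), the classical torsionally rigid concentric tube model combines the tubes' precurvatures weighted by their bending stiffnesses,

$$
\boldsymbol{\kappa}(s) \;=\; \frac{\sum_i E_i I_i \, \mathbf{R}_z(\theta_i)\, \boldsymbol{\kappa}_i^{*}(s)}{\sum_i E_i I_i},
$$

where $E_i I_i$ is the bending stiffness of tube $i$, $\boldsymbol{\kappa}_i^{*}$ its precurvature, and $\mathbf{R}_z(\theta_i)$ the rotation of tube $i$ about the common axis. The hybrid model described above must additionally account for the distributed loads applied by the tendons and the relative twist they induce between tubes.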
{"title":"Modeling Tendon-actuated Concentric Tube Robots","authors":"Yash Chitalia, A. Donder, P. Dupont","doi":"10.1109/ISMR57123.2023.10130176","DOIUrl":"https://doi.org/10.1109/ISMR57123.2023.10130176","url":null,"abstract":"Mechanics-based models have been developed to describe the shape of tendon-actuated continuum robots. Models have also been developed to describe the shape of concentric tube robots, i.e., nested combinations of precurved superelastic tubes. While an important class of continuum robots used in endoscopic and intracardiac medical applications combines these two designs, existing models do not cover this combination. Tendon-actuated models are limited to a single tube while concentric tube models do not include tendon-produced forces and moments. This paper derives a mechanics-based model for this hybrid design and assesses it using numerical and physical experiments involving a pair of tendon-actuated tubes. It is demonstrated that, similar to concentric tube robots, relative twisting between the tendon-actuated tubes is an important factor in determining overall robot shape.","PeriodicalId":276757,"journal":{"name":"2023 International Symposium on Medical Robotics (ISMR)","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127931749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Evaluation Platform for Catheter Ablation Navigation
Pub Date: 2023-04-19 | DOI: 10.1109/ISMR57123.2023.10130271
Florian Heemeyer, C. Chautems, Q. Boehler, J. Merino, Bradley J. Nelson
Cardiac arrhythmia refers to an abnormal or irregular heartbeat, which usually results in disturbed blood flow. This can lead to reduced cardiac output and an increased risk of blood clot formation, which can cause life-threatening heart failure or stroke. Radio-frequency catheter ablation is becoming the treatment of choice for most cardiac arrhythmias. To recover the normal heartbeat, the abnormal excitation sites inside the heart are first identified and then isolated or destroyed by applying radio-frequency energy through an ablation catheter. The design and precise navigation of these ablation catheters are currently topics of considerable interest. However, the lack of a standardized evaluation setup for catheter ablation makes it challenging to properly compare different systems. In this paper, we present an evaluation platform to tackle this problem. The setup consists of a 3D-printed anatomical model, a tracking system for the catheter, and a dedicated graphical user interface. The performance of a catheter ablation system can then be assessed based on various performance metrics, such as procedure duration, contact stability, or ablation angle. Finally, we conduct a proof-of-concept study to demonstrate the usefulness of the proposed setup.
{"title":"An Evaluation Platform for Catheter Ablation Navigation","authors":"Florian Heemeyer, C. Chautems, Q. Boehler, J. Merino, Bradley J. Nelson","doi":"10.1109/ISMR57123.2023.10130271","DOIUrl":"https://doi.org/10.1109/ISMR57123.2023.10130271","url":null,"abstract":"Cardiac arrhythmia refers to a condition of an abnormal or irregular heartbeat, which usually results in disturbed blood flow. This can lead to a reduced cardiac output and an increased risk for blood clot formation, which can cause life-threatening heart failure or stroke. Radio-frequency catheter ablation is becoming the treatment of choice for most cardiac arrhythmias. To recover the normal heartbeat, the abnormal excitation sites inside the heart are first identified and then isolated or destroyed by applying radio-frequency energy through the use of an ablation catheter. The design and precise navigation of these ablation catheters is currently a topic of considerable interest. However, the lack of a standardized evaluation setup for catheter ablation makes it challenging to properly compare different systems. In this paper, we present an evaluation platform to tackle this problem. The setup consists of a 3D printed anatomical model, a tracking system for the catheter and a dedicated graphical user interface. The performance of a catheter ablation system can then be assessed based on various performance metrics, such as the procedure duration, contact stability or the ablation angle. Finally, we conduct a proof-of-concept study to demonstrate the usefulness of the proposed setup.","PeriodicalId":276757,"journal":{"name":"2023 International Symposium on Medical Robotics (ISMR)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132362139","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Markerless Suture Needle Tracking From A Robotic Endoscope Based On Deep Learning
Pub Date: 2023-04-19 | DOI: 10.1109/ISMR57123.2023.10130199
Yiwei Jiang, Haoying Zhou, G. Fischer
Robot-assisted surgery has been advancing rapidly for the past two decades. More recently, the automation of robotic surgical tasks has become a focus of research. In this area, the detection and tracking of a surgical tool are crucial for an autonomous system to plan and perform a procedure. For example, knowing the position and posture of a needle is a prerequisite for an automatic suturing system to grasp it and perform suturing tasks. In this paper, we propose a novel method, based on deep learning and point-to-point registration, to track the 6-degree-of-freedom (DOF) pose of a metal suture needle from a robotic endoscope (an Endoscopic Camera Manipulator from the da Vinci Robotic Surgical System), without the help of any marker. The proposed approach was implemented and evaluated in a standard simulated surgical environment provided by the 2021–2022 AccelNet Surgical Robotics Challenge, demonstrating the potential to be translated into a real-world scenario. A customized dataset containing 836 images collected from the simulated scene, with ground-truth pose and keypoint annotations, was constructed to train the neural network model. The best pipeline achieved an average position error of 1.76 mm and an average orientation error of 8.55 degrees, and it can run at up to 10 Hz on a PC.
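The abstract does not specify how the detected keypoints are turned into a 6-DOF pose; one plausible realization (an assumption, not the paper's stated pipeline) is to match the learned 2D keypoint detections against the corresponding 3D points on the known needle model with a PnP solver, as sketched below with OpenCV.

```cpp
// Sketch only: recover the needle pose in the endoscope camera frame from
// learned 2D keypoints and the matching 3D keypoints on the needle model.
// Keypoint ordering and camera intrinsics are assumptions here.
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

bool estimateNeedlePose(const std::vector<cv::Point3f>& modelKeypoints, // on the needle CAD model
                        const std::vector<cv::Point2f>& imageKeypoints, // from the learned detector
                        const cv::Mat& cameraMatrix,                    // 3x3 intrinsics
                        const cv::Mat& distCoeffs,
                        cv::Mat& rvec, cv::Mat& tvec)
{
    // At least 4 correspondences are needed for a unique solution in general.
    if (modelKeypoints.size() < 4 || modelKeypoints.size() != imageKeypoints.size())
        return false;

    // rvec (Rodrigues rotation) and tvec express the needle pose in the
    // endoscope camera frame.
    return cv::solvePnP(modelKeypoints, imageKeypoints,
                        cameraMatrix, distCoeffs, rvec, tvec,
                        false, cv::SOLVEPNP_ITERATIVE);
}
```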
{"title":"Markerless Suture Needle Tracking From A Robotic Endoscope Based On Deep Learning","authors":"Yiwei Jiang, Haoying Zhou, G. Fischer","doi":"10.1109/ISMR57123.2023.10130199","DOIUrl":"https://doi.org/10.1109/ISMR57123.2023.10130199","url":null,"abstract":"Advancements in robot-assisted surgery have been rapidly growing since two decades ago. More recently, the automation of robotic surgical tasks has become the focus of research. In this area, the detection and tracking of a surgical tool are crucial for an autonomous system to plan and perform a procedure. For example, knowing the position and posture of a needle is a prerequisite for an automatic suturing system to grasp it and perform suturing tasks. In this paper, we proposed a novel method, based on Deep Learning and Point-to-point Registration, to track the 6 degrees of freedom (DOF) pose of a metal suture needle from a robotic endoscope (an Endoscopic Camera Manipulator from the da Vinci Robotic Surgical Systems), without the help of any marker. The proposed approach was implemented and evaluated in a standard simulated surgical environment provided by the 2021–2022 AccelNet Surgical Robotics Challenge, thus demonstrates the potential to be translated into a real-world scenario. A customized dataset containing 836 images collected from the simulated scene with ground truth of poses and key points information was constructed to train the neural network model. The best pipeline achieved an average position error of 1.76 mm while the average orientation error is 8.55 degrees, and it can run up to 10 Hz on a PC.","PeriodicalId":276757,"journal":{"name":"2023 International Symposium on Medical Robotics (ISMR)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124067517","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards the development of a MEMS-based force sensor for in vivo tumor tissue demarcation
Pub Date: 2023-04-19 | DOI: 10.1109/ISMR57123.2023.10130220
Nidhi Malhotra, Kimberly Hoang, J. Desai
Surgical resection is one of the primary treatments for patients with brain tumors. Precise detection of the tumor tissue margin during the resection procedure ensures safe, maximal removal of the tumor tissue while protecting the surrounding healthy, functional tissue. Micro-electro-mechanical-system (MEMS)-based sensors with high sensitivity and a small footprint can enable reliable intraoperative in vivo tumor margin assessment, quantitatively distinguishing between normal and abnormal tissue. In this paper, we present the design of a MEMS-based piezoresistive force sensor integrated into a steerable robotic probe. The sensor design optimization, microfabrication process flow, and packaging methodology are presented. The characterization process of the diaphragm-based piezoresistive sensor using a commercial force sensor and an indentation system is described. The sensor can measure forces in the range of 0-0.3 N, and the packaged sensor is within 2 mm in diameter. The sensor's output shows a linear response with force, and the sensor has a maximum hysteresis of 5.58%.
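As a hedged sketch of the post-processing typically behind such a characterization (the paper's exact definitions are not given in the abstract), the linear response can be quantified with a least-squares fit of output versus applied force, and hysteresis with the maximum loading/unloading difference as a percentage of full-scale output.

```cpp
// Sketch only: conventional linearity and hysteresis calculations for a
// force-sensor characterization; data layout is an assumption.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// Fit y = a*x + b by least squares; returns {a, b}.
std::pair<double, double> linearFit(const std::vector<double>& x,
                                    const std::vector<double>& y)
{
    const std::size_t n = x.size();
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (std::size_t i = 0; i < n; ++i) {
        sx += x[i]; sy += y[i]; sxx += x[i] * x[i]; sxy += x[i] * y[i];
    }
    const double a = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    const double b = (sy - a * sx) / n;
    return {a, b};
}

// Hysteresis (% of full-scale output): outputs sampled at the same force
// levels while loading and unloading; FSO taken as the span of the loading curve.
double hysteresisPercentFSO(const std::vector<double>& loading,
                            const std::vector<double>& unloading)
{
    double maxDiff = 0.0;
    for (std::size_t i = 0; i < loading.size(); ++i)
        maxDiff = std::max(maxDiff, std::fabs(loading[i] - unloading[i]));
    const double fso = loading.back() - loading.front();
    return 100.0 * maxDiff / fso;
}
```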
{"title":"Towards the development of a MEMS-based force sensor for in vivo tumor tissue demarcation","authors":"Nidhi Malhotra, Kimberly Hoang, J. Desai","doi":"10.1109/ISMR57123.2023.10130220","DOIUrl":"https://doi.org/10.1109/ISMR57123.2023.10130220","url":null,"abstract":"Surgical resection is one of the primary treatments for patients with brain tumors. The precise detection of tumor tissue margin during the resection procedure ensures safe maximal removal of the tumor tissue while protecting the surrounding healthy, functional tissue. Micro-electro-mechanical-system (MEMS)-based sensors with high sensitivity and small footprint can enable reliable intraoperative in vivo tumor margin assessment, quantitatively distinguishing between normal and abnormal tissue. In this paper, we present the design of a MEMS-based piezoresistive force sensor integrated into a steerable robotic probe. The sensor design optimization, microfabrication process flow, and packaging methodology are presented. The characterization process of the diaphragm-based piezoresistive sensor using a commercial force sensor and an indentation system is described. The sensor can measure forces in the range of 0-0.3N, and the packaged sensor is within 2 mm diameter. The sensor's output shows a linear response with force, and the sensor has a maximum hysteresis of 5.58 %.","PeriodicalId":276757,"journal":{"name":"2023 International Symposium on Medical Robotics (ISMR)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129173862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards Reliable Colorectal Cancer Polyps Classification via Vision Based Tactile Sensing and Confidence-Calibrated Neural Networks
Pub Date: 2023-04-19 | DOI: 10.1109/ISMR57123.2023.10130197
Siddhartha Kapuria, Tarunraj G. Mohanraj, Nethra Venkatayogi, Ozdemir Can Kara, Y. Hirata, P. Minot, Ariel Kapusta, N. Ikoma, F. Alambeigi
In this study, to address the over-confident outputs of existing artificial-intelligence-based colorectal cancer (CRC) polyp classification techniques, we propose a confidence-calibrated residual neural network. Utilizing a novel vision-based tactile sensing (VS-TS) system and unique CRC polyp phantoms, we demonstrate that traditional metrics such as accuracy and precision are not sufficient to encapsulate model performance for a diagnosis as sensitive as CRC polyp classification. To this end, we develop a residual neural network classifier and address its over-confident outputs for CRC polyp classification via the post-processing method of temperature scaling. To evaluate the proposed method, we introduce noise and blur to the textural images obtained from the VS-TS and test the model's reliability for non-ideal inputs through reliability diagrams and other statistical metrics.
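Temperature scaling, the post-processing step named above, fits a single scalar on a held-out validation set and divides the network's logits by it before the softmax, leaving accuracy unchanged while softening over-confident probabilities. A minimal sketch follows, using a grid search rather than gradient-based fitting and assuming the logits and labels have already been extracted from the trained residual network.

```cpp
// Sketch only: post-hoc temperature scaling on validation logits/labels.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

// Softmax of logits divided by temperature T (numerically stabilized).
std::vector<double> softmaxWithTemperature(const std::vector<double>& logits, double T)
{
    double maxLogit = -std::numeric_limits<double>::infinity();
    for (double z : logits) maxLogit = std::max(maxLogit, z);

    std::vector<double> p(logits.size());
    double sum = 0.0;
    for (std::size_t k = 0; k < logits.size(); ++k) {
        p[k] = std::exp((logits[k] - maxLogit) / T);
        sum += p[k];
    }
    for (double& v : p) v /= sum;
    return p;
}

// Fit T by minimizing negative log-likelihood over a coarse grid.
double fitTemperature(const std::vector<std::vector<double>>& valLogits,
                      const std::vector<int>& valLabels)
{
    double bestT = 1.0, bestNll = std::numeric_limits<double>::infinity();
    for (double T = 0.5; T <= 5.0; T += 0.01) {
        double nll = 0.0;
        for (std::size_t i = 0; i < valLogits.size(); ++i)
            nll -= std::log(softmaxWithTemperature(valLogits[i], T)[valLabels[i]]);
        if (nll < bestNll) { bestNll = nll; bestT = T; }
    }
    return bestT;   // T > 1 softens over-confident predictions
}
```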
{"title":"Towards Reliable Colorectal Cancer Polyps Classification via Vision Based Tactile Sensing and Confidence-Calibrated Neural Networks","authors":"Siddhartha Kapuria, Tarunraj G. Mohanraj, Nethra Venkatayogi, Ozdemir Can Kara, Y. Hirata, P. Minot, Ariel Kapusta, N. Ikoma, F. Alambeigi","doi":"10.1109/ISMR57123.2023.10130197","DOIUrl":"https://doi.org/10.1109/ISMR57123.2023.10130197","url":null,"abstract":"In this study, toward addressing the over-confident outputs of existing artificial intelligence-based colorectal cancer (CRC) polyp classification techniques, we propose a confidence-calibrated residual neural network. Utilizing a novel vision-based tactile sensing (VS-TS) system and unique CRC polyp phantoms, we demonstrate that traditional metrics such as accuracy and precision are not sufficient to encapsulate model performance for handling a sensitive CRC polyp diagnosis. To this end, we develop a residual neural network classifier and address its over-confident outputs for CRC polyps classification via the post-processing method of temperature scaling. To evaluate the proposed method, we introduce noise and blur to the obtained textural images of the VSTS and test the model's reliability for non-ideal inputs through reliability diagrams and other statistical metrics.","PeriodicalId":276757,"journal":{"name":"2023 International Symposium on Medical Robotics (ISMR)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125528753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development of Robot-assisted Ultrasound System for Fetoscopic Tracking in Twin to Twin Transfusion Syndrome Surgery
Pub Date: 2023-04-19 | DOI: 10.1109/ISMR57123.2023.10130208
Yuyu Cai, Ayoob Davoodi, Ruixuan Li, M. Ourak, K. Niu, J. Deprest, E. V. Poorten
Twin-to-Twin Transfusion Syndrome (TTTS) is one of the most severe congenital conditions. Fetoscopic Laser Photocoagulation (FLP) has been shown to be the preferred treatment for TTTS. The complicated intrauterine environment and limited incision diameter make FLP a very challenging surgery. Moreover, the motion of the fetoscope is complex. An ultrasound (US) probe is used to offer a view of the anatomy as well as the relative pose of the fetoscope; however, keeping the fetoscope tip (FT) in view is difficult. A robotic US system could take over the complex US probe handling and improve the surgeon's view of the intrauterine scene. In this paper, an automatic fetoscope tracking algorithm was developed that ensures safe interaction between the robot and fragile tissues during FLP surgery. The US probe attached to the robot was controlled by a hybrid position-force control strategy. For evaluation, four fetoscope motion profiles were investigated on a custom-designed phantom. The system achieved a 78.87% visibility rate in the US image evaluation. The contact force during tracking was 0.99 ± 0.68 N, ensuring the patient's safety. The proposed system demonstrates the capability of accurate real-time fetoscope tracking.
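The controller details are not given in the abstract; the sketch below shows the textbook structure a hybrid position-force strategy of this kind typically takes, with force regulated along the probe axis to keep gentle tissue contact and position servoed in the orthogonal directions to keep the fetoscope tip in the image. Gains, frames, and the tip-error source are assumptions, not the paper's values.

```cpp
// Sketch only: hybrid position-force velocity command in the probe frame,
// where z is the probe axis (force-controlled) and x,y track the fetoscope
// tip in the ultrasound image (position-controlled).
#include <Eigen/Dense>

struct HybridGains { double kForce; double kPos; };

Eigen::Vector3d probeVelocityCommand(double measuredForceZ,              // N, along probe axis
                                     double desiredForceZ,               // e.g. ~1 N contact force
                                     const Eigen::Vector3d& tipError,    // fetoscope tip error w.r.t. image center
                                     const HybridGains& g)
{
    Eigen::Vector3d v;
    // x,y: position-controlled subspace, tracks the fetoscope tip.
    v.x() = g.kPos * tipError.x();
    v.y() = g.kPos * tipError.y();
    // z: force-controlled subspace, regulates the tissue contact force.
    v.z() = g.kForce * (desiredForceZ - measuredForceZ);
    return v;   // sent to the robot's Cartesian velocity interface
}
```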
{"title":"Development of Robot-assisted Ultrasound System for Fetoscopic Tracking in Twin to Twin Transfusion Syndrome Surgery","authors":"Yuyu Cai, Ayoob Davoodi, Ruixuan Li, M. Ourak, K. Niu, J. Deprest, E. V. Poorten","doi":"10.1109/ISMR57123.2023.10130208","DOIUrl":"https://doi.org/10.1109/ISMR57123.2023.10130208","url":null,"abstract":"Twin to Twin Transfusion Syndrome (TTTS) is one of the most severe conditions of congenital anomaly. Fetoscopic Laser Photocoagulation (FLP) has been shown as the preferred treatment for TTTS. The complicated intrauterine environment and limited incision diameter make FLP a very challenging surgery. Besides, the motion of the fetoscope is complex. Ultrasound (US) probe is used to offer a view of the anatomy as well as the relative pose of the fetoscope. However, keeping the fetoscope tip (FT) in view is difficult. A robotic US system could potentially solve the complex US probe handling and could improve the surgeon's view of the intrauterine scene. In this paper, an automatic fetoscope tracking algorithm was developed and ensured safe interactions between robot and fragile tissues for FLP surgery. The US probe attached to the robot was controlled by a hybrid position-force control strategy. For evaluation, four fetoscope motion profiles were investigated on a custom-designed phantom. The system achieved a 78.87% visibility rate for the US image evaluation. The contact force was 0.99± 0.68 N during the tracking to ensure the patient's safety. The proposed system demonstrates the capability of accurate real-time fetoscope tracking.","PeriodicalId":276757,"journal":{"name":"2023 International Symposium on Medical Robotics (ISMR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131742465","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modelisation of a Human-Exoskeleton Interaction for Cerebral Palsy
Pub Date: 2023-04-19 | DOI: 10.1109/ISMR57123.2023.10130232
Aurélie Bonnefoy, Sabrina Otmani, N. Mansard, O. Stasse, G. Michon, B. Watier
This paper presents a method to model the human-exoskeleton interaction for patients suffering from spastic cerebral palsy. We base our work on the Clinical Gait Analysis performed on two 9-year-old twin sisters. The first sister has a pathology called spastic cerebral palsy, while the second sister is non-pathological, without any impairment. This paper aims to determine the proportion of the walking effort that can be supported by an exoskeleton in order to allow a pathological child gait to converge toward a non-pathological one. Based on experimental data, a model of the pathological gait is reconstructed using optimal estimation. The model relies on mechanical differential equations of motion. The interaction between the human and the exoskeleton is then modelled using optimal control, with ground contacts included in the model. Results show that the human-produced joint torques are within the possible torque range of the child with cerebral palsy, which justifies the use of an exoskeleton to correct a pathological gait. The code for running the simulations is provided as open source.
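In the standard form such gait models take (background consistent with the abstract's mechanical differential equations of motion and ground-contact modelling, not an equation quoted from the paper), the torques split between the child and the exoskeleton enter the contact-constrained dynamics as

$$
M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + g(q) \;=\; \tau_{\mathrm{human}} + \tau_{\mathrm{exo}} + J_c(q)^{\top}\lambda_c ,
$$

where $q$ is the joint configuration, $M$ the inertia matrix, $C$ the Coriolis and centrifugal terms, $g$ gravity, $J_c$ the ground-contact Jacobian, and $\lambda_c$ the contact forces. The optimal control problem then selects $\tau_{\mathrm{exo}}$, and hence the share of the walking effort carried by the exoskeleton, so that the simulated gait approaches the non-pathological reference.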
{"title":"Modelisation of a Human-Exoskeleton Interaction for Cerebral Palsy","authors":"Aurélie Bonnefoy, Sabrina Otmani, N. Mansard, O. Stasse, G. Michon, B. Watier","doi":"10.1109/ISMR57123.2023.10130232","DOIUrl":"https://doi.org/10.1109/ISMR57123.2023.10130232","url":null,"abstract":"This paper presents a method to model a human-exoskeleton interaction for patients suffering from spastic cerebral palsy. We base our work on the Clinical Gait Analysis performed on two 9-year-old twin sisters. The first sister has a pathology called spastic cerebral palsy, while the second sister is non-pathological without any impairment. This paper aims at determining the proportion of the walking efforts that can be supported by an exoskeleton in order to allow a pathological child gait to converge toward a non-pathological one. Based on experimental data, a model of the pathological gait reconstructed using an optimal estimation. The model relies on mechanical differential equations of motion. The interaction between the human and the exoskeleton is then modelled using optimal control, while modelling ground contacts. Results show that the human produced joint torques are within the possible torque range of the child with cerebral palsy, which justifies the use of an exoskeleton to correct a pathological gait. The code for running the simulations is provided in open source.","PeriodicalId":276757,"journal":{"name":"2023 International Symposium on Medical Robotics (ISMR)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132728655","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}