Stephen Thompson, Miguel P. Xochicale, T. Dowrick, M. Clarkson
SciKit-Surgery provides open source libraries to support research and translation of applications for augmented reality in surgery [1]. This paper discusses recent developments in SciKit-Surgery and case studies using SciKit-SurgeryBARD to support research into visualisation and user interface design for augmented reality in surgery [2], [3]. The availability of high quality software tools for research and translation is a key enabler for scientific progress. Research into surgical robotics, image guided surgery, and augmented reality for surgery brings together many disciplines and depends on a strong engineering base to provide the tools that researchers need (e.g., hardware interfaces, data management, data processing, visualisation, and user interfaces). SciKit-Surgery was conceived as a more accessible replacement for existing toolkits written predominantly in C++. Experience has taught us that whilst implementations in C++ could be robust and offer optimised performance, the need to learn the language and the difficulties of maintaining cross-platform compilation presented a higher barrier to entry for most researchers. Whilst research software can be initially developed using short-term research grants, the longer-term sustainability of the software depends on other researchers being able to contribute to it, both for maintenance and to introduce new features. For that to happen, the software needs to be compact, written in a language that can be easily read by humans, and well documented. We conceived SciKit-Surgery as a set of individual Python modules that can be used on their own by researchers to explore a specific topic, or assembled into high quality applications that can be rapidly deployed to the clinic to enable translation from bench to bedside.
Stephen Thompson, Miguel P. Xochicale, T. Dowrick, M. Clarkson, "Using SciKit-Surgery for Augmented Reality in Surgery," Proceedings of The 15th Hamlyn Symposium on Medical Robotics 2023, 26 June 2023. doi:10.31256/hsmr2023.22
Concentric tube robots (CTRs) and notched wrists are two technologies that have been investigated for medical applications. CTRs consist of pre-curved super-elastic tubes that are nested concentrically and are linearly translated and axially rotated with respect to one another to produce movement. Separately, notched wrists are tubular instruments that can achieve large bending angles via notches cut into the tube and shortening actuation cables that run along their length. These two robotic tools have been investigated independently, but very few studies have explored combining them [1], [2]. This paper compares, in simulation, the workspace and dexterity of a three-tube CTR with two hybrid CTR and notch-cut wrist systems. These metrics are key in measuring the performance of robots, particularly surgical robots. To perform complicated surgical tasks, it is critical to increase the number of spatial points that the robot can reach and the number of obtainable orientations at these points.
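Workspace and dexterity metrics of this kind are commonly computed by sampling robot configurations, voxelising the reached tip positions, and counting the distinct orientations attainable within each voxel. The abstract does not give the authors' implementation; the sketch below is a minimal, hypothetical version of such a metric (the `voxel` and `angle_bin` resolutions are illustrative choices):

```python
import math
from collections import defaultdict

def dexterity_map(poses, voxel=0.005, angle_bin=math.radians(10)):
    """Bin sampled tip poses (x, y, z, yaw) into spatial voxels and
    count the distinct orientation bins reachable in each voxel."""
    orientations = defaultdict(set)
    for x, y, z, yaw in poses:
        vox = (round(x / voxel), round(y / voxel), round(z / voxel))
        orientations[vox].add(round(yaw / angle_bin))
    # Workspace size = number of reachable voxels; dexterity = mean
    # number of distinct orientation bins per reachable voxel.
    reachable_voxels = len(orientations)
    mean_dexterity = sum(len(s) for s in orientations.values()) / reachable_voxels
    return reachable_voxels, mean_dexterity
```

In a full comparison, `poses` would be generated by sweeping each design's joint space through its forward kinematics, and the two numbers compared across designs.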
Paul H. Kang, R. Nguyen, T. Looi, "A Comparison of the Workspace and Dexterity of Hybrid Concentric Tube Robot and Notched Wrist Systems," Proceedings of The 15th Hamlyn Symposium on Medical Robotics 2023, 26 June 2023. doi:10.31256/hsmr2023.15
Although modern features in surgical robots such as 3D vision, “wrist” instruments, tremor abolition, and motion scaling have greatly enhanced surgical dexterity, technical skill remains a major challenge for surgeons and trainees. Surgeons who receive constructive, real-time feedback can make more significant improvements in their performance [1]. In recent years, research in automated surgical skill assessment has made considerable progress; however, the majority of surgical evaluation methods are post-operative analyses. A few studies have introduced real-time surgical performance evaluation, for example using a Convolutional Neural Network [2], a Codebook with a Support Vector Machine [3], and a Convolutional Neural Network with Long Short-Term Memory [4]. One common limitation of these studies is data leakage during training, which results in an inflated estimate of model performance. Moreover, these studies cannot depict an intuitive representation of what actually differentiates expertise levels. In this study, we introduce a method to extract the unusual movements which are rarely seen in experts, and to identify the types of these unusual movements. We believe detecting and correcting unusual movements is an important aspect of how surgeons improve their skills.
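The abstract does not specify the semi-supervised model used. As a toy illustration of the general idea — fit a model to expert data only, then flag movements that deviate from it — a z-score outlier detector over a single motion feature might look like this (all names and thresholds are hypothetical):

```python
import statistics

def flag_unusual(expert_features, trial_features, z_thresh=3.0):
    """Semi-supervised outlier flagging: fit a normal model to
    expert-only feature values (no labels for unusual movements are
    needed) and flag trial segments whose z-score exceeds z_thresh."""
    mu = statistics.mean(expert_features)
    sigma = statistics.stdev(expert_features)
    return [i for i, f in enumerate(trial_features)
            if abs(f - mu) > z_thresh * sigma]
```

A real system would use richer, multi-dimensional kinematic features and a learned density model, but the training-only-on-experts structure is the same.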
Y. Zheng, Ann Majewicz-Fey, "Extracting Unusual Movements during Robotic Surgical Tasks: A Semi-Supervised Learning Approach," Proceedings of The 15th Hamlyn Symposium on Medical Robotics 2023, 26 June 2023. doi:10.31256/hsmr2023.32
Piper C. Cannon, Shaan A. Setia, N. Kavoussi, S. Herrell, Robert Webster
Using an image guidance system constructed over the past several years [1], [2], we have recently collected our first in vivo human pilot study data on the use of the da Vinci for image-guided partial nephrectomy [3]. Others have also previously created da Vinci image guidance systems (IGS) for various organs, using a variety of approaches [4]. Our system uses touch-based registration, in which the da Vinci’s tool tips lightly trace over the tissue surface and collect a point cloud. This point cloud is then registered to segmented medical images. We provide the surgeon with a picture-in-picture 3D Slicer display, in which animated da Vinci tools move exactly as the real tools do in the endoscope view (see [2] for illustrations of this). The purpose of this paper is to discuss recent in vivo experiences and how they are informing future research on robotic IGS systems, particularly the use of ultrasound.
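The touch-based registration step aligns the traced point cloud with the segmented model, typically by iterating between point matching and a least-squares rigid fit (as in ICP). The closed-form rigid fit for corresponding points — shown here in 2D, in pure Python, as a hedged sketch rather than the authors' implementation — is the core of that loop:

```python
import math

def rigid_register_2d(moving, fixed):
    """Least-squares rigid registration (rotation + translation) between
    corresponding 2D point sets: the step ICP repeats after matching the
    traced point cloud to the segmented surface."""
    n = len(moving)
    mcx = sum(p[0] for p in moving) / n; mcy = sum(p[1] for p in moving) / n
    fcx = sum(p[0] for p in fixed) / n;  fcy = sum(p[1] for p in fixed) / n
    # Closed-form 2D Kabsch: optimal angle from summed cross/dot products
    # of the centred point pairs.
    s_cross = s_dot = 0.0
    for (mx, my), (fx, fy) in zip(moving, fixed):
        ax, ay = mx - mcx, my - mcy
        bx, by = fx - fcx, fy - fcy
        s_cross += ax * by - ay * bx
        s_dot += ax * bx + ay * by
    theta = math.atan2(s_cross, s_dot)
    # Translation that maps the rotated moving centroid onto the fixed one.
    tx = fcx - (mcx * math.cos(theta) - mcy * math.sin(theta))
    ty = fcy - (mcx * math.sin(theta) + mcy * math.cos(theta))
    return theta, tx, ty
```

The 3D case is analogous but uses an SVD of the cross-covariance matrix; production systems use library implementations of this fit.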
Piper C. Cannon, Shaan A. Setia, N. Kavoussi, S. Herrell, Robert Webster, "How Insights from In Vivo Human Pilot Studies with da Vinci Image Guidance are Informing Next Generation System Design," Proceedings of The 15th Hamlyn Symposium on Medical Robotics 2023, 26 June 2023. doi:10.31256/hsmr2023.5
Mitral valve regurgitation is the most common valvular disease, affecting 10% of the population over 75 years old [1]. Current standard-of-care diagnostic imaging for mitral valve procedures primarily consists of transesophageal echocardiography (TEE), as it provides a clear view of the mitral valve leaflets and surrounding tissue. Heart simulator technology has been adopted widely, both by industry for the evaluation of technologies for imaging heart valves [2] and by academia for the assessment of modelled heart valves [3]. Recently, a workflow has been developed to create 3D, patient-specific valve models directly from TEE images. When viewed dynamically using TEE within a pulse-duplicator simulator, these models have been demonstrated to produce pathology-specific TEE images similar to those acquired from the patient’s valves in vivo [4]. However, producing a mesh model of the valve geometry from TEE imaging remains a challenge. Previously, producing a valve model involved a labor-intensive series of steps, including manual leaflet segmentation and computer-aided design (CAD) manipulation to derive a 3D-printable mold from a raw segmentation. Our objective is to automate the workflow and reduce the labor required to produce these valve models. To address the leaflet segmentation problem, we developed DeepMitral, a fully automatic valve leaflet segmentation tool. Following leaflet segmentation, we have developed tools for automatically deriving mesh models that can easily be integrated into a mold base.
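The mesh-derivation step starts from a binary leaflet segmentation. As a hedged, toy-scale illustration (real pipelines use marching cubes or similar surface-extraction algorithms), the boundary of a voxelised segmentation can be found by collecting the faces that separate occupied from empty voxels:

```python
def exposed_faces(voxels):
    """Given a set of occupied voxel coordinates (a binary segmentation),
    return the boundary faces as (voxel, outward-normal) pairs -- the raw
    material from which a surface mesh is assembled."""
    neighbours = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                  (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    faces = []
    for (x, y, z) in voxels:
        for dx, dy, dz in neighbours:
            # A face is on the surface if the neighbour cell is empty.
            if (x + dx, y + dy, z + dz) not in voxels:
                faces.append(((x, y, z), (dx, dy, dz)))
    return faces
```

Marching cubes refines this idea by interpolating the surface position within each boundary cell, yielding the smooth, printable meshes the workflow needs.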
Patrick K. Carnahan, E. Chen, Terry M. Peters, "From 4D Transesophageal Echocardiography to Patient Specific Mitral Valve Models," Proceedings of The 15th Hamlyn Symposium on Medical Robotics 2023, 26 June 2023. doi:10.31256/hsmr2023.77
V. Penza, Andrea Santangelo, D. Paladini, L. Mattos
Obstetric ultrasound (US) is widely used in prenatal diagnosis to monitor the development and growth of the embryo or fetus and to detect congenital anomalies. The benefits offered by US in terms of timely diagnosis are extensive, but the quality of the examination is closely linked to the experience of the clinician [1]. Although proper training and assessment of acquired skills are considered of paramount importance to ensure a quality exam, there is no European standard establishing a training path with an objective assessment of the operator’s capabilities. In fact, experience is often evaluated merely on the basis of the number of clinical examinations performed. However, an operator with daily US examination experience may not perform as well as a true expert owing to inadequate training [2]. Many studies have been conducted to assess hand gesture with the aim of establishing metrics that discriminate between experts and novices, which can also be used to design specific training and objectively evaluate the acquired skills [3], [4]. Inspired by these works, hand movement has also been studied for fetal US on a phantom [5] and in a virtual-reality simulated scenario [6]. This paper presents a novel study for the objective assessment of the operator’s experience in obstetric US examinations, based on hand gestures and the forces applied with the US probe on the abdomen during real obstetric US examinations. A Data Recording System was designed to collect this information during US examinations performed on pregnant women in the second trimester by clinicians with three different levels of experience (expert, intermediate, and novice). The results presented here focus on assessing a set of metrics with the potential to provide an objective discrimination of the operator’s level of experience. With respect to previous works, the novelty lies in validating the state-of-the-art discriminating metrics in a real scenario. Furthermore, this work includes as a novelty the measurement of the forces applied on the abdomen, which appears highly relevant to clinical practice. This study was approved by the Regional Ethics Committee of Liguria (Italy) under protocol number 379/2022 - DB id 12369.
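The abstract does not list the exact metrics evaluated. Two metrics widely used in the hand-motion-assessment literature it builds on — total path length and a jerk-based smoothness measure — can be computed from a sampled probe trajectory as follows (a generic sketch, not the authors' implementation):

```python
def motion_metrics(positions, dt):
    """Compute two expertise-discriminating metrics from a uniformly
    sampled trajectory: total path length, and integrated squared jerk
    (experts tend to show shorter, smoother paths)."""
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    path_length = sum(dist(positions[i], positions[i + 1])
                      for i in range(len(positions) - 1))
    # Jerk (third derivative of position) via third finite differences.
    jerk_sq = 0.0
    for i in range(len(positions) - 3):
        for k in range(len(positions[0])):
            j = (positions[i + 3][k] - 3 * positions[i + 2][k]
                 + 3 * positions[i + 1][k] - positions[i][k]) / dt ** 3
            jerk_sq += j * j * dt
    return path_length, jerk_sq
```

Force-based metrics (e.g. mean and peak probe force) would be computed analogously from the force channel of the recording system.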
V. Penza, Andrea Santangelo, D. Paladini, L. Mattos, "Computer-based assessment of the operator’s experience in obstetric ultrasound examination based on hand movements and applied forces," Proceedings of The 15th Hamlyn Symposium on Medical Robotics 2023, 26 June 2023. doi:10.31256/hsmr2023.28
Korn Borvorntanajanya, S. Treratanakulchai, Enrico Franco, F. Rodriguez y Baena
Recently, the use of eversion-based movement in robotics has gained popularity. Eversion mechanisms enable objects to turn inside out, similar to flipping a sock, allowing them to move through narrow spaces without exerting direct force on the environment. This type of movement can be used for medical devices such as catheters and endoscopes [1]. For instance, an autonomous colonoscope must navigate through the tight and curved spaces of the colon, and eversion movement is a suitable solution that allows the colonoscope to move more safely. Furthermore, the implementation of feedback control enhances the accuracy and efficiency of the examination process. The total length of the everted portion (L) is typically controlled by a reel mechanism [2], which consists of a spool wrapped tightly with plastic tubing and connected to a motor. The system calculates the total length by counting the number of motor rotations. However, the diameter of the reel varies depending on the layers of material around the roller, making it difficult to calculate the total length from the standard roller model [3], [4]. This paper introduces a method for calculating the total length of the everted portion based on area. The model was validated using an optical tracking camera and compared with four other methods for calculating the total length in roller mechanisms.
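The paper's exact area-based formulation is not reproduced in the abstract. The underlying problem can be illustrated with two simple models of the reel: the naive model assumes a constant pay-out radius, while a spiral model accounts for the radius shrinking by one material thickness per turn as tubing leaves the spool (symbols `r0` and `t` are illustrative, not the paper's notation):

```python
import math

def length_constant_radius(theta, r0):
    """Naive roller model: pay-out radius assumed fixed at r0,
    so dispensed length is just r0 * rotation angle."""
    return r0 * theta

def length_spiral(theta, r0, t):
    """Spiral model: effective radius decreases by thickness t per full
    turn, r(phi) = r0 - t*phi/(2*pi), so the dispensed length is the
    integral of r(phi) over the rotation angle theta."""
    return r0 * theta - t * theta ** 2 / (4 * math.pi)
```

The gap between the two models grows with rotation angle, which is why rotation counting alone drifts over long insertions and an area-based correction is useful.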
Korn Borvorntanajanya, S. Treratanakulchai, Enrico Franco, F. Rodriguez y Baena, "Area-Based Total Length Estimation for Position Control in Soft Growing Robots," Proceedings of The 15th Hamlyn Symposium on Medical Robotics 2023, 26 June 2023. doi:10.31256/hsmr2023.60
Over 1 million core-needle breast biopsies are performed every year in the US alone [1], while gastrointestinal and prostate biopsies are estimated in similar numbers. The cost of a core-needle breast biopsy ranges between $500 for manual procedures and $6,000 for image-guided procedures [2]. A retrospective study indicated that approximately 2.5% of breast biopsies fail [3]. Needle bending has been identified as a significant cause of error in biopsies and is particularly likely to occur at the insertion stage [2]. The associated risks include: i) biopsy of the wrong site, leading to misdiagnosis; ii) puncture of sensitive areas in proximity to the insertion path; iii) repeated insertions, and thus longer procedure duration and increased patient discomfort. Biopsy needles are also prone to buckling, which can damage the needle permanently. Common techniques for correcting needle bending in clinical settings include repeating the insertion (which can be time-consuming) or using a needle guide (which reduces the maximum insertion depth). In research, axial rotation is typically employed for steering bevel-tip needles, but it is less effective for needles with an axially symmetric tip [4]. Additionally, straight insertions require continuous axial rotation, which can damage soft tissue due to the spinning of the bevel tip [5]. Alternative approaches employ steerable needles, which are not yet part of clinical practice [6]. We have developed a mechanical device that detects needle bending as soon as it occurs and immediately reduces the insertion force, thus helping to avoid deep insertions with deflected needles and the associated risks. Unlike existing solutions, our design does not require actuators or sensors, hence it can be made MRI-safe, sterilisable, or disposable. Finally, our device can be used with a variety of standard needles, including multi-bevel needles (e.g. diamond tip or conical tip).
Ayhan Aktas, Enrico Franco, "DEBI: a new mechanical device for safer needle insertions," Proceedings of The 15th Hamlyn Symposium on Medical Robotics 2023, 26 June 2023. doi:10.31256/hsmr2023.18
Tony Qin, Peter Connor, K. Dang, R. Alterovitz, R. Webster, Caleb Rucker
Colorectal cancer is a pervasive disease: an estimated 4.6% of men and 4.2% of women will suffer from it in their lifetime [1]. Precancerous polyps can be small (<5 mm), medium (6-9 mm), or large (>10 mm) [2]. Small polyps are the most frequent, but polyps too large for immediate endoscopic removal during screening occur 135,000 times per year in the US alone [1]. There are two primary options for removing these polyps: endoscopic removal or partial colectomy. Endoscopic procedures, such as endoscopic submucosal dissection (ESD), are less invasive and reduce the risk of infection, recurrence, and other adverse events [3]. Despite this, approximately 50,000 patients each year undergo partial colectomies for polyps which could have been removed endoscopically [4]. A primary obstacle to wider use of endoscopic procedures is how challenging they are for physicians to perform, due to the limited dexterity of existing trans-endoscopic tools [5]. Currently, tools emerge straight out of the tip of the colonoscope, and moving them requires moving the tip of the colonoscope [6]. To enable tools to move independently of the colonoscope, we propose an endoscopically deployable, flexible robotic system, as shown in Fig. 1. This system deploys a flexible robotic arm through each channel of a standard two-channel colonoscope. Each arm is composed of a setup sheath followed by a steerable sheath, with each sheath built using a concentric push-pull robot (CPPR) [7]. Each arm has a hollow central lumen through which tools (e.g. forceps, electrosurgery probes, etc.) can be passed. This design adds dexterity and provides the physician with two independent manipulators, with the goal of making ESD easier to perform.
Computational Analysis of Design Parameters for a Bimanual Concentric Push-Pull Robot
https://doi.org/10.31256/hsmr2023.10
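Steerable sheaths like the CPPR arms above are often analysed under a constant-curvature assumption: each actuated segment bends into a circular arc. The sketch below is a minimal illustration of that standard model, not the authors' actual CPPR kinematics; the curvature and length values are made up for the example.

```python
import numpy as np

def tip_position(kappa: float, length: float) -> np.ndarray:
    """Planar tip position of a constant-curvature arc.

    kappa  : curvature in 1/mm (0 means a straight segment)
    length : arc length in mm
    Returns [x, z]: x is lateral deflection, z is distance along
    the original (straight) insertion axis.
    """
    if abs(kappa) < 1e-9:                 # straight-tube limit
        return np.array([0.0, length])
    theta = kappa * length                # total bending angle (rad)
    r = 1.0 / kappa                       # bend radius
    return np.array([r * (1.0 - np.cos(theta)), r * np.sin(theta)])

# A hypothetical 50 mm sheath bent into a quarter circle:
kappa = (np.pi / 2) / 50.0
print(tip_position(kappa, 50.0))  # both coordinates equal the bend radius
```

In a quarter-circle bend the tip sits one bend radius sideways and one bend radius forward, which is why dexterity grows quickly with even modest curvature over the sheath length.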
S. Souipas, Anh M Nguyen, Stephen Laws, Brian Davies, F. Rodriguez y Baena
Image-based detection and localisation of surgical tools have received significant attention due to the development of relevant deep learning techniques, along with recent upgrades in computational capabilities. Although not as accurate as optical trackers [1], image-based methods are easy to deploy and require no surgical tool redesign to accommodate trackable markers, which could be beneficial when it comes to cheaper, "off-the-shelf" tools such as scalpels and scissors. In the operating room, however, these techniques suffer from drawbacks due to the presence of highly reflective or featureless materials, as well as occlusions such as smoke and blood. Furthermore, networks often utilise tool 3D models (e.g. CAD data), not only for the purpose of point correspondence but also for pose regression. The aforementioned "off-the-shelf" tools are rarely accompanied by such prior 3D structure data. Ultimately, in addition to the above hindrances, estimating 3D pose using a monocular camera setup poses a challenge in itself due to the lack of depth information. Considering these limitations, we present SimPS-Net, a network capable of both detection and 3D pose estimation of standard surgical tools using a single RGB camera.
SimPS-Net: Simultaneous Pose & Segmentation Network of Surgical Tools
https://doi.org/10.31256/hsmr2023.36
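The monocular depth ambiguity mentioned above can be made concrete with a pinhole camera model: all points along a viewing ray project to the same pixel, so a single RGB image cannot, by itself, fix a tool's distance from the camera. The intrinsics below are made-up illustrative values, not the authors' setup.

```python
import numpy as np

# Hypothetical pinhole intrinsics: 800 px focal length, principal
# point at (320, 240) for a 640x480 image.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(point_3d: np.ndarray) -> np.ndarray:
    """Project a 3D point in the camera frame to pixel coordinates."""
    uvw = K @ point_3d
    return uvw[:2] / uvw[2]          # perspective divide

# The same viewing ray sampled at two depths: scaling a 3D point by
# any positive factor leaves its projection unchanged.
near = np.array([0.05, 0.02, 0.40])  # e.g. a tool tip 0.4 m away
far = 2.5 * near                     # same direction, 1.0 m away
print(project(near), project(far))   # identical pixel coordinates
```

This scale ambiguity is why monocular pose networks must lean on learned priors (tool size, shape, context) rather than geometry alone.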