Improving the Temporal Accuracy of Eye Gaze Tracking for the da Vinci Surgical System Through Automatic Detection of Decalibration Events and Recalibration
Regine Buter, John J. Han, Ayberk Acar, Yizhou Li, Paola Ruiz Puentes, R. Soberanis-Mukul, Iris Gupta, Joyraj Bhowmick, Ahmed Ghazi, Andreas Maier, Mathias Unberath, Jie Ying Wu
Pub Date: 2024-01-26 | DOI: 10.1142/s2424905x24400014

Erratum: A Surgical Robotic Framework for Safe and Autonomous Data-Driven Learning and Manipulation of an Unknown Deformable Tissue with an Integrated Critical Space
Braden P. Murphy, Manuel Retana, Farshid Alambeigi
Pub Date: 2024-01-19 | DOI: 10.1142/s2424905x23920010

Development and Evaluation of a Markerless 6 DOF Pose Tracking Method for a Suture Needle from a Robotic Endoscope
Yiwei Jiang, Haoying Zhou, Gregory S. Fischer
Pub Date: 2023-12-15 | DOI: 10.1142/s2424905x23400093

Preliminary theoretical considerations on the stiffness characteristics of a tensegrity joint for the use in dynamic orthoses
Leon Schaeffer, David Herrmann, Thomas Schratzenstaller, Sebastian Dendorfer, Valter Bohm
Pub Date: 2023-12-15 | DOI: 10.1142/s2424905x23400081

Optical Fiber-Based Needle Shape Sensing in Real Tissue: Single Core vs. Multicore Approaches
Dimitri A Lezcano, Yernar Zhetpissov, Alexandra Cheng, Jin Seob Kim, Iulian I Iordachita
Pub Date: 2023-11-03 | DOI: 10.1142/s2424905x23500046

Flexible needle insertion procedures are common in minimally invasive surgery for diagnosing and treating prostate cancer. Bevel-tip needles let physicians steer the needle during long insertions to avoid vital anatomical structures and reduce post-operative patient discomfort. To provide needle placement feedback, sensors are embedded into needles to determine the needle's real-time 3D shape during operation, without needing to visualize the needle intra-operatively. Research in fiber optics has produced a range of biocompatible, MRI-compatible optical shape sensors, such as single-core and multicore fiber Bragg gratings, that provide real-time shape feedback. In this paper, we directly compare single-core and multicore fiber-based needle shape-sensing using identically constructed, four-active-area sensorized bevel-tip needles inserted into phantom and ex-vivo tissue on the same experimental platform. We found that in phantom tissue the two needles performed comparably, with no statistically significant difference (p = 0.164 > 0.05), but in ex-vivo real tissue the single-core fiber sensorized needle significantly outperformed the multicore configuration (p = 0.0005 < 0.05). This paper also presents the experimental platform and method for directly comparing these optical shape sensors on the needle shape-sensing task, and offers direction, insight, and considerations for future work on optimizing sensorized needles.
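
The abstract reports paired p-values for the two sensor configurations without naming the statistical test. The sketch below illustrates one way such a paired comparison could be run; the choice of the Wilcoxon signed-rank test, the per-insertion RMSE arrays, and all numbers are assumptions for illustration, not taken from the paper.

```python
# Illustrative only: paired comparison of per-insertion shape-reconstruction
# errors for two sensorized needles. Test choice and data are assumptions.
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical RMSE (mm) of the reconstructed needle shape, one entry per
# insertion trial, paired across the single-core and multicore needles.
rmse_single_core = np.array([0.52, 0.61, 0.48, 0.70, 0.55, 0.59])
rmse_multicore = np.array([0.58, 0.66, 0.51, 0.93, 0.62, 0.71])

stat, p_value = wilcoxon(rmse_single_core, rmse_multicore)
alpha = 0.05
verdict = "significant" if p_value < alpha else "not significant"
print(f"Wilcoxon signed-rank: p = {p_value:.4f} ({verdict} at alpha = {alpha})")
```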

Robot-Assisted Vascular Shunt Insertion with the dVRK Surgical Robot
Karthik Dharmarajan, Will Panitch, Baiyu Shi, Huang Huang, Lawrence Yunliang Chen, Masoud Moghani, Qinxi Yu, Kush Hari, Thomas Low, Danyal Fer, Animesh Garg, Ken Goldberg
Pub Date: 2023-11-03 | DOI: 10.1142/s2424905x23400068

Robot Learning Incorporating Human Interventions in the Real World for Autonomous Surgical Endoscopic Camera Control
Yafei Ou, Sadra Zargarzadeh, Mahdi Tavakoli
Pub Date: 2023-10-13 | DOI: 10.1142/s2424905x23400044

Recent studies in surgical robotics have focused on automating common surgical subtasks, such as grasping and manipulation, using deep reinforcement learning (DRL). In this work, we consider surgical endoscopic camera control for object tracking, e.g., using the endoscopic camera manipulator (ECM) from the da Vinci Research Kit (dVRK) (Intuitive Inc., Sunnyvale, CA, USA), as a typical surgical robot learning task. A DRL policy for controlling the robot's joint-space movements is first trained in a simulation environment and then continues learning in the real world. To speed up training and avoid significant failures (in this case, losing view of the object), human interventions are incorporated into the training process, and regular DRL is combined with generative adversarial imitation learning (GAIL) to encourage imitation of human behaviors. Experiments show that an average reward of 159.8 is achieved within 1,000 steps, compared to only 121.8 without human interventions, and the view of the moving object is lost only twice across three training trials. These results show that human interventions can improve learning speed and significantly reduce failures during training.
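
As a rough illustration of combining a task reward with a GAIL-style imitation bonus and human interventions, consider the sketch below. The blending weight, function names, and bonus form are assumptions; the paper's actual formulation may differ.

```python
# Illustrative only: reward shaping that blends an environment reward with
# a GAIL-style imitation bonus, plus a human-intervention override.
import numpy as np

def gail_bonus(expert_prob: float) -> float:
    # -log(1 - D(s, a)), where D estimates how expert-like (s, a) looks;
    # the probability is clipped to avoid log(0).
    return -np.log(1.0 - np.clip(expert_prob, 1e-6, 1.0 - 1e-6))

def blended_reward(env_reward: float, expert_prob: float,
                   imitation_weight: float = 0.5) -> float:
    # Weighted sum of the task reward and the imitation bonus
    # (the weight here is a placeholder, not the paper's value).
    return env_reward + imitation_weight * gail_bonus(expert_prob)

def select_action(policy_action, human_action=None):
    # A human intervention, when given, overrides the policy; the overridden
    # (state, action) pair would be stored as an expert sample for GAIL.
    return human_action if human_action is not None else policy_action
```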

Automatic Detection of Out-of-body Frames in Surgical Videos for Privacy Protection Using Self-supervised Learning and Minimal Labels
Ziheng Wang, Xi Liu, Conor Perreault, Anthony Jarc
Pub Date: 2023-05-04 | DOI: 10.1142/s2424905x23500022

Endoscopic video recordings are widely used in minimally invasive robot-assisted surgery, but when the endoscope is outside the patient's body, it can capture irrelevant segments that may contain sensitive information. To address this, we propose a framework that accurately detects out-of-body frames in surgical videos by leveraging self-supervision with minimal data labels. We use a massive amount of unlabeled endoscopic images to learn meaningful representations in a self-supervised manner. Our approach, which involves pre-training on an auxiliary task and fine-tuning with limited supervision, outperforms previous methods for detecting out-of-body frames in surgical videos captured from da Vinci X and Xi surgical systems. The average F1 scores range from [Formula: see text] to [Formula: see text]. Remarkably, using only [Formula: see text] of the training labels, our approach still maintains an average F1 score above 97, outperforming fully-supervised methods with [Formula: see text] fewer labels. These results demonstrate the potential of our framework to facilitate safe handling of surgical video recordings and enhance data privacy protection in minimally invasive surgery.
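
The "fine-tuning with limited supervision" step could, for example, freeze a self-supervised encoder and train only a small binary head on the few labeled frames. The sketch below assumes a PyTorch setup with placeholder shapes; the paper's backbone, auxiliary task, and hyperparameters are not specified here.

```python
# Illustrative only: freeze a pretrained encoder, train a binary
# in-body vs. out-of-body head on a handful of labeled frames.
import torch
import torch.nn as nn

encoder = nn.Sequential(                      # stand-in for a pretrained backbone
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
for p in encoder.parameters():
    p.requires_grad = False                   # keep self-supervised features fixed

head = nn.Linear(16, 1)                       # out-of-body logit
opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

frames = torch.randn(8, 3, 64, 64)            # placeholder labeled frames
labels = torch.randint(0, 2, (8, 1)).float()  # 1 = out-of-body

for _ in range(10):                           # a few supervised steps
    loss = loss_fn(head(encoder(frames)), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```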

Teleoperated and Automated Control of a Robotic Tool for Targeted Prostate Biopsy
Blayton Padasdao, Samuel Lafreniere, Mahsa Rabiei, Zolboo Batsaikhan, Bardia Konh
Pub Date: 2023-03-01 | Epub Date: 2023-03-18 | DOI: 10.1142/s2424905x23400020
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10513146/pdf/nihms-1878856.pdf

This work presents a robotic tool with bidirectional manipulation and control capabilities for targeted prostate biopsy interventions. Targeted prostate biopsy is an effective image-guided technique that detects significant cancer with fewer cores and fewer unnecessary biopsies than systematic biopsy. The robotic tool comprises a compliant flexure section, fabricated on a nitinol tube, that enables bidirectional bending via actuation of two internal tendons, and a biopsy mechanism for extracting tissue samples. The kinematic and static models of the compliant flexure section, as well as teleoperated and automated control of the robotic tool, are presented and validated experimentally. The controller was shown to drive the tip of the robotic tool along sinusoidal set-point trajectories with reasonable accuracy in air and inside phantom tissue. Finally, the capability of the robotic tool to bend, reach targeted positions inside phantom tissue, and extract a biopsy sample is evaluated.
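
For intuition about the kinematics of a tendon-driven flexure, a common first approximation is the constant-curvature model, in which tendon displacement maps to a bending angle and the tip follows an arc. The sketch below uses that generic model with hypothetical dimensions; the paper's actual kinematic and static models may differ.

```python
# Illustrative only: planar constant-curvature kinematics for a
# tendon-driven flexure section (generic model, hypothetical values).
import numpy as np

def tip_position(tendon_disp: float, length: float, offset: float) -> np.ndarray:
    """Tip position (x, y) in mm of a flexure of arc length `length`.

    tendon_disp: tendon displacement (mm); offset: tendon distance from the
    neutral axis (mm). Constant curvature: kappa = tendon_disp / (offset * length).
    """
    kappa = tendon_disp / (offset * length)
    if abs(kappa) < 1e-9:                      # straight configuration
        return np.array([0.0, length])
    theta = kappa * length                     # total bending angle (rad)
    return np.array([(1.0 - np.cos(theta)) / kappa,  # arc geometry
                     np.sin(theta) / kappa])

print(tip_position(tendon_disp=1.0, length=30.0, offset=2.0))  # ~[7.3, 28.8]
```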

Author Index Volume 7 (2022)
Pub Date: 2022-12-01 | DOI: 10.1142/s2424905x2299001x