
Journal of Medical Robotics Research: Latest Publications

Preliminary theoretical considerations on the stiffness characteristics of a tensegrity joint for the use in dynamic orthoses
Pub Date : 2023-12-15 DOI: 10.1142/s2424905x23400081
Leon Schaeffer, David Herrmann, Thomas Schratzenstaller, Sebastian Dendorfer, Valter Bohm
Citations: 0
Optical Fiber-Based Needle Shape Sensing in Real Tissue: Single Core vs. Multicore Approaches
Pub Date : 2023-11-03 DOI: 10.1142/s2424905x23500046
Dimitri A Lezcano, Yernar Zhetpissov, Alexandra Cheng, Jin Seob Kim, Iulian I Iordachita
Flexible needle insertion procedures are common in minimally invasive surgeries for diagnosing and treating prostate cancer. Bevel-tip needles give physicians the ability to steer the needle during long insertions, avoiding vital anatomical structures in the patient and reducing post-operative discomfort. To provide needle placement feedback to the physician, sensors are embedded in the needle to determine its real-time 3D shape during operation without intra-operative imaging of the needle. Extensive research in fiber optics has produced a range of biocompatible, MRI-compatible optical shape sensors that provide real-time shape feedback, such as single-core and multicore fiber Bragg gratings. In this paper, we directly compare single-core and multicore fiber-based needle shape sensing using identically constructed, four-active-area sensorized bevel-tip needles inserted into phantom and ex-vivo tissue on the same experimental platform. We found that in phantom tissue the two needles performed comparably (p = 0.164 > 0.05), but in ex-vivo real tissue the single-core fiber sensorized needle significantly outperformed the multicore configuration (p = 0.0005 < 0.05). This paper also presents the experimental platform and method for directly comparing these optical shape sensors on the needle shape-sensing task, and provides direction, insight, and considerations for future work on optimizing sensorized needles.
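The significance claims above rest on a two-sample comparison of shape-estimation errors. As a reference point (not the paper's code, and with purely illustrative data), Welch's unequal-variance t-statistic and its degrees of freedom can be computed with the standard library alone:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t-statistic and Welch-Satterthwaite
    degrees of freedom for samples a and b (unequal variances)."""
    va, vb = variance(a), variance(b)   # sample variances
    na, nb = len(a), len(b)
    se2 = va / na + vb / nb             # squared standard error of the mean difference
    t = (mean(a) - mean(b)) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df
```

The t-statistic is then looked up against the t-distribution with `df` degrees of freedom to obtain the p-value.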
Citations: 0
Robot-Assisted Vascular Shunt Insertion with the dVRK Surgical Robot
Pub Date : 2023-11-03 DOI: 10.1142/s2424905x23400068
Karthik Dharmarajan, Will Panitch, Baiyu Shi, Huang Huang, Lawrence Yunliang Chen, Masoud Moghani, Qinxi Yu, Kush Hari, Thomas Low, Danyal Fer, Animesh Garg, Ken Goldberg
Citations: 0
Robot Learning Incorporating Human Interventions in the Real World for Autonomous Surgical Endoscopic Camera Control
Pub Date : 2023-10-13 DOI: 10.1142/s2424905x23400044
Yafei Ou, Sadra Zargarzadeh, Mahdi Tavakoli
Recent studies in surgical robotics have focused on automating common surgical subtasks, such as grasping and manipulation, using deep reinforcement learning (DRL). In this work, we consider surgical endoscopic camera control for object tracking – e.g., using the endoscopic camera manipulator (ECM) from the da Vinci Research Kit (dVRK) (Intuitive Inc., Sunnyvale, CA, USA) – as a typical surgical robot learning task. A DRL policy for controlling the robot's joint-space movements is first trained in a simulation environment and then continues learning in the real world. To speed up training and avoid significant failures (in this case, losing view of the object), human interventions are incorporated into the training process, and regular DRL is combined with generative adversarial imitation learning (GAIL) to encourage imitation of human behaviors. Experiments show that an average reward of 159.8 is achieved within 1,000 steps, compared with only 121.8 without human interventions, and the view of the moving object is lost only twice across 3 training trials. These results show that human interventions can improve learning speed and significantly reduce failures during training.
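The training scheme described above (task reward plus a GAIL-style imitation bonus, with a human able to override the policy's action and have the overridden pair logged as expert data) can be sketched as follows. The toy environment, discriminator, and override rule are hypothetical stand-ins, not the authors' dVRK setup:

```python
import math

def rollout(env_step, policy, discriminator, human_override=None,
            steps=100, lam=0.5):
    """One rollout mixing the environment reward with a GAIL-style
    imitation bonus -log(1 - D(s, a)); when human_override returns
    an action, it replaces the policy action and the (state, action)
    pair is stored as an expert sample for discriminator training."""
    s, total, expert_buf = 0.0, 0.0, []
    for _ in range(steps):
        a = policy(s)
        if human_override is not None:
            h = human_override(s, a)
            if h is not None:              # human stepped in
                expert_buf.append((s, h))
                a = h
        s, r_env = env_step(s, a)
        # discriminator D(s, a) in (0, 1): high means "expert-like"
        r_imit = -math.log(max(1e-8, 1.0 - discriminator(s, a)))
        total += r_env + lam * r_imit
    return total, expert_buf
```

In the paper's setting the policy, discriminator, and environment are neural networks and the dVRK simulator/robot; here they are plain callables so the control flow is visible.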
Citations: 0
Automatic Detection of Out-of-body Frames in Surgical Videos for Privacy Protection Using Self-supervised Learning and Minimal Labels
Pub Date : 2023-05-04 DOI: 10.1142/s2424905x23500022
Ziheng Wang, Xi Liu, Conor Perreault, Anthony Jarc
Endoscopic video recordings are widely used in minimally invasive robot-assisted surgery, but when the endoscope is outside the patient's body it can capture irrelevant segments that may contain sensitive information. To address this, we propose a framework that accurately detects out-of-body frames in surgical videos by leveraging self-supervision with minimal data labels. We use a massive amount of unlabeled endoscopic images to learn meaningful representations in a self-supervised manner. Our approach, which involves pre-training on an auxiliary task and fine-tuning with limited supervision, outperforms previous methods for detecting out-of-body frames in surgical videos captured from da Vinci X and Xi surgical systems. The average F1 scores range from [Formula: see text] to [Formula: see text]. Remarkably, using only [Formula: see text] of the training labels, our approach still maintains an average F1 score above 97, outperforming fully supervised methods with [Formula: see text] fewer labels. These results demonstrate the potential of our framework to facilitate the safe handling of surgical video recordings and enhance data privacy protection in minimally invasive surgery.
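The metric quoted throughout is the F1 score over per-frame labels. For reference, a minimal implementation of binary F1 (illustrative, not the authors' evaluation code):

```python
def f1_score(y_true, y_pred, positive=1):
    """Binary F1 = harmonic mean of precision and recall for the
    positive class (here: 'out-of-body' frames)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)
```

F1 is preferred over accuracy here because out-of-body frames are a small minority of each recording, so a classifier that never fires could still score high accuracy.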
Citations: 0
Teleoperated and Automated Control of a Robotic Tool for Targeted Prostate Biopsy.
Pub Date : 2023-03-01 Epub Date: 2023-03-18 DOI: 10.1142/s2424905x23400020
Blayton Padasdao, Samuel Lafreniere, Mahsa Rabiei, Zolboo Batsaikhan, Bardia Konh

This work presents a robotic tool with bidirectional manipulation and control capabilities for targeted prostate biopsy interventions. Targeted prostate biopsy is an effective image-guided technique that detects significant cancer with fewer cores and fewer unnecessary biopsies than systematic biopsy. The robotic tool comprises a compliant flexure section fabricated on a nitinol tube, which enables bidirectional bending via actuation of two internal tendons, and a biopsy mechanism for extracting tissue samples. The kinematic and static models of the compliant flexure section, as well as teleoperated and automated control of the robotic tool, are presented and validated experimentally. The controller was shown to drive the tip of the robotic tool along sinusoidal set-point trajectories with reasonable accuracy both in air and inside phantom tissue. Finally, the tool's ability to bend, reach targeted positions inside phantom tissue, and extract a biopsy sample is evaluated.
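A common way to model such a tendon-driven flexure is the constant-curvature assumption: tendon shortening Δl at offset r from the neutral axis bends a section of arc length L by θ = Δl/r, and the tip position follows in closed form. This is a generic sketch of that standard model, not necessarily the kinematics validated in the paper:

```python
import math

def flexure_tip(L, r, dl):
    """Constant-curvature tip pose of a tendon-driven flexure.
    L: section arc length, r: tendon offset from the neutral axis,
    dl: tendon shortening. Returns (x, z, theta) in the bending plane."""
    theta = dl / r                  # total bending angle
    if abs(theta) < 1e-9:           # straight configuration
        return 0.0, L, 0.0
    rho = L / theta                 # bending radius (arc length / angle)
    x = rho * (1.0 - math.cos(theta))
    z = rho * math.sin(theta)
    return x, z, theta
```

For the opposite tendon, dl changes sign and the section bends the other way, which is the bidirectional behavior described above.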

Citations: 3
Author Index Volume 7 (2022)
Pub Date : 2022-12-01 DOI: 10.1142/s2424905x2299001x
Citations: 0
Determining the Significant Kinematic Features for Characterizing Stress during Surgical Tasks Using Spatial Attention.
Pub Date : 2022-06-01 Epub Date: 2022-08-22 DOI: 10.1142/s2424905x22410069
Yi Zheng, Grey Leonard, Herbert Zeh, Ann Majewicz Fey

It has been shown that intraoperative stress can negatively affect a surgeon's skills during laparoscopic procedures. For novice surgeons, stressful conditions can lead to significantly higher velocity, acceleration, and jerk of the surgical instrument tips, resulting in faster but less smooth movements. However, it is still unclear which of these kinematic features (velocity, acceleration, or jerk) best distinguishes normal from stressed conditions. To find the kinematic feature most affected by intraoperative stress, we implemented a spatial-attention-based Long Short-Term Memory (LSTM) classifier. In a prior IRB-approved experiment, we collected data from medical students performing an extended peg-transfer task, randomized into a control group and a group performing the task under external psychological stress. In our prior work, we obtained "representative" normal or stressed movements from this dataset using kinematic data as the input. In this study, a spatial attention mechanism describes the contribution of each kinematic feature to the classification of normal versus stressed movements. Under Leave-One-User-Out (LOUO) cross-validation, the classifier reached an overall accuracy of 77.11% in classifying "representative" normal and stressed movements from kinematic features. More importantly, we also examined the spatial attention extracted from the proposed classifier. Velocity and acceleration on both sides received significantly higher attention when classifying a normal movement (p <= 0.0001); velocity (p <= 0.015) and jerk (p <= 0.001) on the non-dominant hand received significantly higher attention when classifying a stressed movement, and, notably, the attention on non-dominant-hand jerk showed the largest increase when moving from describing normal movements to stressed movements (p = 0.0000). In general, we found that jerk on the non-dominant hand most effectively characterizes stressed movements in novice surgeons.
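The three kinematic features compared above are successive time derivatives of the tool-tip position; from uniformly sampled positions they can be estimated by repeated finite differences (a generic sketch, not the study's processing pipeline):

```python
def derivative(samples, dt):
    """First finite difference of a uniformly sampled 1-D signal."""
    return [(b - a) / dt for a, b in zip(samples, samples[1:])]

def jerk(positions, dt):
    """Jerk (third derivative of position): difference the position
    trace three times. Velocity and acceleration are the first two
    intermediate results."""
    velocity = derivative(positions, dt)
    acceleration = derivative(velocity, dt)
    return derivative(acceleration, dt)
```

In practice these features would be computed per hand and per axis, then fed as channels into the LSTM; triple differencing also amplifies sensor noise, so some smoothing is usually applied first.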

Citations: 0
Design of a 6-DoF Parallel Robotic Platform for MRI Applications.
Pub Date : 2022-06-01 Epub Date: 2022-06-27 DOI: 10.1142/s2424905x22410057
Mishek Musa, Saikat Sengupta, Yue Chen

In this work, the design, analysis, and characterization of a 6-degree-of-freedom (DoF) parallel robotic motion generation platform for magnetic resonance imaging (MRI) applications are presented. The robot was motivated by the need for a platform able to produce accurate 6-DoF motion inside the MRI bore to serve as ground truth for motion modeling; other applications include manipulating interventional tools such as biopsy and ablation needles and ultrasound probes for therapy and neuromodulation under MRI guidance. The robot comprises six pneumatic cylinder actuators governed by a robust sliding mode controller. Tracking experiments with the pneumatic actuator indicate average errors of 0.69 ± 0.14 mm for step-signal tracking and 0.67 ± 0.40 mm for sinusoidal-signal tracking. To demonstrate the feasibility of using the proposed robot for minimally invasive procedures, a benchtop phantom experiment showed a mean positional error of 1.20 ± 0.43 mm and a mean orientational error of 1.09 ± 0.57°. Experiments in a 3T whole-body human MRI scanner indicate that the robot is MRI compatible, achieving a positional error of 1.68 ± 0.31 mm and an orientational error of 1.51 ± 0.32° inside the scanner. This study demonstrates the device's potential to enable accurate 6-DoF motion in the MRI environment.
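A 6-DoF parallel (Stewart-type) platform of this kind is typically commanded through its inverse kinematics: given the desired pose of the moving plate, each actuator length follows in closed form as the distance between its base and platform attachment points. The geometry below is generic (hypothetical attachment points), not the authors' design:

```python
import math

def stewart_leg_lengths(base_pts, plat_pts, t, rpy):
    """Inverse kinematics of a 6-DoF parallel platform:
    leg i length = | t + R(rpy) @ p_i - b_i |, where b_i are base
    attachment points, p_i platform points (platform frame),
    t the translation, and rpy roll/pitch/yaw angles (rad)."""
    roll, pitch, yaw = rpy
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    # ZYX (yaw-pitch-roll) rotation matrix
    R = [[cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
         [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
         [-sp,     cp * sr,                cp * cr]]
    lengths = []
    for b, p in zip(base_pts, plat_pts):
        q = [t[k] + sum(R[k][j] * p[j] for j in range(3)) for k in range(3)]
        lengths.append(math.dist(q, b))
    return lengths
```

Each commanded length is then the set-point for one pneumatic cylinder's sliding mode controller; the forward kinematics (pose from lengths), by contrast, has no closed form and is solved numerically.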

Citations: 0
Author Index Volume 6 (2021)
Pub Date : 2021-09-01 DOI: 10.1142/s2424905x21990014
Citations: 0