
Latest publications in Journal of medical robotics research

Improving the Temporal Accuracy of Eye Gaze Tracking for the da Vinci Surgical System Through Automatic Detection of Decalibration Events and Recalibration
Pub Date: 2024-01-26 DOI: 10.1142/s2424905x24400014
Regine Buter, John J. Han, Ayberk Acar, Yizhou Li, Paola Ruiz Puentes, R. Soberanis-Mukul, Iris Gupta, Joyraj Bhowmick, Ahmed Ghazi, Andreas Maier, Mathias Unberath, Jie Ying Wu
Citations: 0
Erratum: A Surgical Robotic Framework for Safe and Autonomous Data-Driven Learning and Manipulation of an Unknown Deformable Tissue with an Integrated Critical Space
Pub Date: 2024-01-19 DOI: 10.1142/s2424905x23920010
Braden P. Murphy, Manuel Retana, Farshid Alambeigi
Citations: 0
Development and Evaluation of a Markerless 6 DOF Pose Tracking Method for a Suture Needle from a Robotic Endoscope
Pub Date: 2023-12-15 DOI: 10.1142/s2424905x23400093
Yiwei Jiang, Haoying Zhou, Gregory S. Fischer
Citations: 0
Preliminary theoretical considerations on the stiffness characteristics of a tensegrity joint for the use in dynamic orthoses
Pub Date: 2023-12-15 DOI: 10.1142/s2424905x23400081
Leon Schaeffer, David Herrmann, Thomas Schratzenstaller, Sebastian Dendorfer, Valter Bohm
Citations: 0
Optical Fiber-Based Needle Shape Sensing in Real Tissue: Single Core vs. Multicore Approaches
Pub Date: 2023-11-03 DOI: 10.1142/s2424905x23500046
Dimitri A Lezcano, Yernar Zhetpissov, Alexandra Cheng, Jin Seob Kim, Iulian I Iordachita
Flexible needle insertion procedures are common in minimally invasive surgeries for diagnosing and treating prostate cancer. Bevel-tip needles give physicians the ability to steer the needle during long insertions, avoiding vital anatomical structures in the patient and reducing post-operative discomfort. To provide needle placement feedback to the physician, sensors are embedded into needles to determine the needle's real-time 3D shape during operation without needing to visualize the needle intra-operatively. Extensive research in fiber optics has produced a variety of biocompatible, MRI-compatible optical shape sensors that provide real-time shape feedback, such as single-core and multicore fiber Bragg gratings. In this paper, we directly compare single-core and multicore fiber-based needle shape sensing using identically constructed, four-active-area sensorized bevel-tip needles inserted into phantom and ex-vivo tissue on the same experimental platform. For shape sensing in phantom tissue, the two needles performed equivalently, with no statistically significant difference (p = 0.164), but in ex-vivo real tissue the single-core fiber sensorized needle significantly outperformed the multicore fiber configuration (p = 0.0005). This paper also presents the experimental platform and method for directly comparing these optical shape sensors on the needle shape-sensing task, and provides direction, insights, and considerations for future work on optimizing sensorized needles.
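The abstract above compares the two sensor configurations with two-sample significance tests (p = 0.164 in phantom tissue, p = 0.0005 ex vivo). As a minimal sketch of how such a comparison is computed — assuming a Welch-style two-sample t statistic, which the abstract does not actually specify — the snippet below evaluates the statistic on hypothetical per-insertion tip-error samples. The error values are invented for illustration and are not the paper's data.

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    va, vb = variance(a), variance(b)  # sample variances
    return (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

# Hypothetical per-insertion tip errors (mm) for each sensor configuration.
single_core = [0.42, 0.55, 0.48, 0.51, 0.46]
multicore = [0.88, 1.02, 0.95, 1.10, 0.91]

t = welch_t(single_core, multicore)  # strongly negative: single-core errs less
```

Converting `t` to a p-value requires the t-distribution CDF with the Welch–Satterthwaite degrees of freedom (e.g., `scipy.stats.ttest_ind(..., equal_var=False)` does both steps).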
Citations: 0
Robot-Assisted Vascular Shunt Insertion with the dVRK Surgical Robot
Pub Date: 2023-11-03 DOI: 10.1142/s2424905x23400068
Karthik Dharmarajan, Will Panitch, Baiyu Shi, Huang Huang, Lawrence Yunliang Chen, Masoud Moghani, Qinxi Yu, Kush Hari, Thomas Low, Danyal Fer, Animesh Garg, Ken Goldberg
Citations: 0
Robot Learning Incorporating Human Interventions in the Real World for Autonomous Surgical Endoscopic Camera Control
Pub Date: 2023-10-13 DOI: 10.1142/s2424905x23400044
Yafei Ou, Sadra Zargarzadeh, Mahdi Tavakoli
Recent studies in surgical robotics have focused on automating common surgical subtasks, such as grasping and manipulation, using deep reinforcement learning (DRL). In this work, we consider surgical endoscopic camera control for object tracking – e.g., using the endoscopic camera manipulator (ECM) from the da Vinci Research Kit (dVRK) (Intuitive Inc., Sunnyvale, CA, USA) – as a typical surgical robot learning task. A DRL policy for controlling the robot's joint-space movements is first trained in a simulation environment and then continues learning in the real world. To speed up training and avoid significant failures (in this case, losing view of the object), human interventions are incorporated into the training process, and regular DRL is combined with generative adversarial imitation learning (GAIL) to encourage imitating human behaviors. Experiments show that an average reward of 159.8 can be achieved within 1,000 steps, compared to only 121.8 without human interventions, and the view of the moving object is lost only twice across 3 training trials. These results show that human interventions can improve learning speed and significantly reduce failures during training.
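Combining regular DRL with GAIL, as this abstract describes, is commonly implemented by blending the environment's task reward with a discriminator-based imitation reward. A minimal sketch under that assumption — the convex blend, the weight `lam`, and the discriminator output value are illustrative choices, not details from the paper:

```python
import math

def gail_reward(d_prob):
    """Imitation reward from a GAIL discriminator output D(s, a) in (0, 1):
    larger when the policy's action looks more expert-like."""
    return -math.log(1.0 - d_prob + 1e-8)  # epsilon avoids log(0)

def combined_reward(task_r, d_prob, lam=0.5):
    # Convex blend of the environment (task) reward and the imitation reward.
    return lam * task_r + (1.0 - lam) * gail_reward(d_prob)

# Hypothetical step: discriminator rates the action as fairly expert-like.
r = combined_reward(task_r=1.0, d_prob=0.8)
```

With `lam=1.0` the agent ignores the demonstrations; with `lam=0.0` it purely imitates — tuning this trade-off is what lets interventions steer exploration without overriding the task objective.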
Citations: 0
Automatic Detection of Out-of-body Frames in Surgical Videos for Privacy Protection Using Self-supervised Learning and Minimal Labels
Pub Date: 2023-05-04 DOI: 10.1142/s2424905x23500022
Ziheng Wang, Xi Liu, Conor Perreault, Anthony Jarc
Endoscopic video recordings are widely used in minimally invasive robot-assisted surgery, but when the endoscope is outside the patient’s body, it can capture irrelevant segments that may contain sensitive information. To address this, we propose a framework that accurately detects out-of-body frames in surgical videos by leveraging self-supervision with minimal data labels. We use a massive amount of unlabeled endoscopic images to learn meaningful representations in a self-supervised manner. Our approach, which involves pre-training on an auxiliary task and fine-tuning with limited supervision, outperforms previous methods for detecting out-of-body frames in surgical videos captured from da Vinci X and Xi surgical systems. The average F1 scores range from [Formula: see text] to [Formula: see text]. Remarkably, using only [Formula: see text] of the training labels, our approach still maintains an average F1 score above 97, outperforming fully-supervised methods with [Formula: see text] fewer labels. These results demonstrate the potential of our framework to facilitate the safe handling of surgical video recordings and enhance data privacy protection in minimally invasive surgery.
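The evaluation metric this abstract reports is the F1 score over binary out-of-body frame labels. A minimal self-contained sketch of that metric, applied to hypothetical per-frame predictions (the labels below are invented for illustration):

```python
def f1_score(y_true, y_pred):
    """F1 for binary out-of-body detection (1 = out-of-body frame)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0  # no true positives: precision or recall is zero
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical ground truth and detector output over six frames.
score = f1_score([1, 1, 0, 0, 1, 0], [1, 1, 0, 0, 0, 0])
```

F1 is preferred over accuracy here because out-of-body frames are typically a small minority of each recording, so a classifier that predicts "in-body" everywhere would still score high accuracy.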
Citations: 0
Teleoperated and Automated Control of a Robotic Tool for Targeted Prostate Biopsy.
Pub Date: 2023-03-01 Epub Date: 2023-03-18 DOI: 10.1142/s2424905x23400020
Blayton Padasdao, Samuel Lafreniere, Mahsa Rabiei, Zolboo Batsaikhan, Bardia Konh

This work presents a robotic tool with bidirectional manipulation and control capabilities for targeted prostate biopsy interventions. Targeted prostate biopsy is an effective image-guided technique that detects significant cancer with fewer cores and fewer unnecessary biopsies than systematic biopsy. The robotic tool comprises a compliant flexure section, fabricated on a nitinol tube, that enables bidirectional bending via actuation of two internal tendons, and a biopsy mechanism for extracting tissue samples. The kinematic and static models of the compliant flexure section, as well as teleoperated and automated control of the robotic tool, are presented and validated with experiments. It was shown that the controller can drive the tip of the robotic tool to follow sinusoidal set-point positions with reasonable accuracy in air and inside a phantom tissue. Finally, the capability of the robotic tool to bend, reach targeted positions inside a phantom tissue, and extract a biopsy sample is evaluated.
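The abstract mentions kinematic models of the compliant flexure section. A common simplification for tendon-driven flexures is the planar constant-curvature model; the sketch below assumes that model purely as an illustration — it is not necessarily the authors' formulation:

```python
import math

def flexure_tip(arc_len, kappa):
    """Planar constant-curvature tip position (x, z) of a flexure of
    length arc_len (mm) bent with uniform curvature kappa (1/mm)."""
    if abs(kappa) < 1e-12:
        return 0.0, arc_len          # straight tube: tip lies on the axis
    theta = kappa * arc_len          # total bend angle (rad)
    # Tip of a circular arc of radius 1/kappa starting tangent to +z.
    return (1 - math.cos(theta)) / kappa, math.sin(theta) / kappa

# Hypothetical example: 15.7 mm flexure bent into a quarter circle.
x, z = flexure_tip(15.707963267948966, 0.1)  # tip near (10 mm, 10 mm)
```

In a tendon-driven design, `kappa` would itself be estimated from tendon displacement or tension through the static model; that mapping is device-specific and is not reproduced here.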

Citations: 3
Author Index Volume 7 (2022)
Pub Date: 2022-12-01 DOI: 10.1142/s2424905x2299001x
Citations: 0