
Latest articles: The international journal of medical robotics + computer assisted surgery : MRCAS

Impact of Body Mass Index on Outcomes in Robotic Gastric Cancer Surgery.
Yujian Xia, Chaoran Yu, Zhaoqiang Chen, Shenjia Wang, Chenglei Yuan, Xiaojun Zhou, Xin Zhao

Background: This study aimed to explore the effect of body mass index (BMI) on intraoperative conditions and postoperative complications (POCs) in robotic gastric cancer (GC) surgery.

Methods: We retrospectively analysed 60 patients with GC who underwent robotic radical gastrectomy (RG) at our hospital. Patients were allocated to a normal-BMI group (18.5 kg/m² ≤ BMI < 25 kg/m²) or a high-BMI group (BMI ≥ 25 kg/m²), and the effect of BMI on intraoperative conditions and POCs was examined.

Results: No statistically significant differences were found between the two groups in surgical procedure (p = 0.669), time to first postoperative flatus (p = 0.172), in-hospital stay (p = 0.454), number of retrieved lymph nodes (LNs) (p = 1.000), or POCs (p > 0.05). However, the high-BMI group had greater intraoperative bleeding (p = 0.018) and a longer operating time (p = 0.016).
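Comparisons like those above reduce to standard two-sample tests. A hypothetical sketch with invented numbers (the study's raw data are not reproduced here), assuming a Mann-Whitney U test for a continuous outcome such as blood loss and a chi-square test for complication rates:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Invented per-group blood-loss measurements (mL); the study's real data differ.
blood_loss_normal = rng.normal(80, 20, size=30)
blood_loss_high = rng.normal(110, 25, size=30)
u_stat, p_blood = stats.mannwhitneyu(blood_loss_normal, blood_loss_high)

# Invented 2x2 complication table: rows = BMI group, cols = [POC, no POC]
table = np.array([[5, 25],
                  [7, 23]])
chi2, p_poc, dof, expected = stats.chi2_contingency(table)
print(f"blood loss: p = {p_blood:.4f}; POCs: p = {p_poc:.4f}")
```

With real data, p < 0.05 on the continuous test would correspond to the significant bleeding difference reported above.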

Conclusions: BMI may not affect the safety of RG for GC. Nevertheless, a high BMI was associated with increased blood loss and prolonged operative time.

Pub Date: 2026-02-01 · DOI: 10.1002/rcs.70141 · Vol. 22(1), e70141
Citations: 0
MCPNet: Morphological Constraint-Based Copy-Paste Network for Semi-Supervised Foetal Head Segmentation.
Baoping Zhu, Linjie Qu, Linkuan Zhou, Zhenyu Luo, Yan Chen

Background: Automatic segmentation of the foetal head from ultrasound imagery is a key step in prenatal examination. However, achieving high-quality semi-supervised foetal head segmentation remains challenging due to low image resolution, unclear boundaries, and inconsistencies between labelled and unlabelled data.

Methods: To overcome these obstacles, we propose MCPNet, a morphological constraint-based copy-paste network for semi-supervised foetal head segmentation, incorporating score-guided morphological refinement (SMR) and copy-paste mixing augmentation (CPMA). SMR employs weighted scores derived from Sobel operators and Euclidean transform to ensure boundary consistency. Additionally, to mitigate the distribution gap between labelled and unlabelled data, we introduce CPMA. This method uses random cropping to swap foreground and background between labelled and unlabelled data.
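The copy-paste mixing idea can be sketched minimally: swap a randomly cropped region between a labelled and an unlabelled image (the same indices would be applied to the corresponding masks). The function name and crop policy below are our assumptions, not the paper's implementation:

```python
import numpy as np

def copy_paste_mix(labelled_img, unlabelled_img, crop_frac=0.5, rng=None):
    """Swap a random rectangular crop between a labelled and an
    unlabelled image; each output mixes foreground/background sources."""
    rng = rng or np.random.default_rng()
    h, w = labelled_img.shape[:2]
    ch, cw = int(h * crop_frac), int(w * crop_frac)
    y = int(rng.integers(0, h - ch + 1))   # random top-left corner
    x = int(rng.integers(0, w - cw + 1))
    mixed_l, mixed_u = labelled_img.copy(), unlabelled_img.copy()
    mixed_l[y:y + ch, x:x + cw] = unlabelled_img[y:y + ch, x:x + cw]
    mixed_u[y:y + ch, x:x + cw] = labelled_img[y:y + ch, x:x + cw]
    return mixed_l, mixed_u

a, b = np.zeros((8, 8)), np.ones((8, 8))
ma, mb = copy_paste_mix(a, b, rng=np.random.default_rng(0))
```

Training on such mixed pairs exposes the model to labelled and unlabelled content inside the same image, which is what narrows the distribution gap.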

Results: On the HC18 and PSFH benchmarks, our method achieves Dice scores of 93.72% and 92.31% respectively with 20% labelled data.
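The Dice score used in these benchmarks measures overlap between predicted and ground-truth masks; a minimal sketch for binary masks:

```python
import numpy as np

def dice(pred, gt, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for boolean segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

p = np.array([[1, 1], [0, 0]])
g = np.array([[1, 0], [0, 0]])
score = dice(p, g)  # 2*1 / (2+1) ≈ 0.667
```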

Conclusions: These results demonstrate the method's superior performance and clinical potential.

Pub Date: 2026-02-01 · DOI: 10.1002/rcs.70140 · Vol. 22(1), e70140
Citations: 0
Dynamic Modelling of the Surgery Arm in Sinaflex Robotic Telesurgery System.
Ramazan Rajabi, Mehrnaz Aghanouri, Hamid Moradi, Alireza Mirbagheri

Background: Robotic telesurgery is increasingly used because of its high accuracy, low complication rates, and remote-control capability. To improve the accuracy of the robotic arms in these systems, a precise dynamic model is essential.

Methods: In this study, we focus on the Sinaflex robotic telesurgery system and develop a dynamic model for a novel slave robot. Our approach involves deriving and linearising the dynamic equations, defining optimal excitation trajectories, and estimating the dynamic parameters via least-squares optimisation. To assess identification accuracy, the joint torques predicted by the model were compared with those measured experimentally.
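The identification step rests on rigid-body dynamics being linear in the parameters, tau = Y(q, q̇, q̈) θ, so stacking regressor rows over an excitation trajectory yields a least-squares problem. A toy sketch with a synthetic regressor (the real Sinaflex regressor comes from the robot's derived equations):

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_params = 200, 5
Y = rng.normal(size=(n_samples, n_params))        # stacked regressor rows
theta_true = np.array([1.2, 0.5, -0.3, 2.0, 0.1])  # invented "true" parameters
tau = Y @ theta_true + rng.normal(scale=0.01, size=n_samples)  # noisy torques

# Least-squares estimate of the dynamic parameters
theta_hat, *_ = np.linalg.lstsq(Y, tau, rcond=None)
rms_error = np.sqrt(np.mean((tau - Y @ theta_hat) ** 2))
```

The RMS of the torque residual is exactly the figure of merit reported in the Results below (0.58 to 1.48 Nm on the real system).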

Results: The method accurately predicts joint torques, with root-mean-square (RMS) errors ranging from 0.58 to 1.48 Nm.

Conclusions: The proposed identification method yields more accurate dynamic parameters for robots with complex mechanisms.

Pub Date: 2025-08-01 · DOI: 10.1002/rcs.70093 · Vol. 21(4), e70093
Citations: 0
Comparison of robotic and open central pancreatectomy.
Pub Date: 2023-06-08 · DOI: 10.14701/ahbps.2023s1.bp-pp-4-7
Man-Ling Wang, Bor-Shiuan Shyr, Shih-Chin Chen, Shin-E Wang, Y. Shyr, B. Shyr
Background: Central pancreatectomy (CP) is an ideal parenchyma-sparing procedure. Experience with robotic central pancreatectomy (RCP) is very limited.

Materials and methods: Patients undergoing CP were included, and the RCP and open central pancreatectomy (OCP) groups were compared.

Results: The most common lesion in patients undergoing CP was serous cystadenoma (35.5%). The median operation time was 4.2 h for RCP versus 5.5 h for OCP. Median blood loss was significantly lower with RCP (20 mL vs. 170 mL, p = 0.001). Postoperative pancreatic fistula occurred in 19.4% of all patients (22.1% with RCP, 15.4% with OCP). Other surgical complications did not differ significantly between the groups. Only one patient, in the OCP group, developed de novo diabetes mellitus (DM), and no steatorrhoea or diarrhoea occurred after either procedure.

Conclusions: RCP is feasible and safe without compromising surgical outcomes or pancreatic function.
Citations: 0
Full coverage path planning algorithm for MRgFUS therapy
A. Antoniou, A. Georgiou, N. Evripidou, C. Damianou
High-quality methods for Magnetic Resonance guided Focussed Ultrasound (MRgFUS) therapy planning are needed for safe and efficient clinical practice. Herein, an algorithm for full-coverage path planning based on preoperative MR images is presented.
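The abstract does not detail the planner, but a common baseline for full-coverage planning over a binary treatment region is a boustrophedon (back-and-forth) sweep; a minimal grid-based sketch, under that assumption:

```python
import numpy as np

def coverage_path(mask):
    """Visit every True cell of a binary grid row by row, alternating
    sweep direction (boustrophedon) to minimise dead travel."""
    path = []
    for r in range(mask.shape[0]):
        cols = np.flatnonzero(mask[r])
        if cols.size == 0:
            continue                  # nothing to treat in this row
        if r % 2 == 1:
            cols = cols[::-1]         # reverse direction on odd rows
        path.extend((r, int(c)) for c in cols)
    return path

mask = np.ones((3, 3), dtype=bool)    # toy treatment region from an MR slice
path = coverage_path(mask)
```

For MRgFUS the grid cells would correspond to sonication spots segmented from the preoperative images.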
Pub Date: 2022-03-07 · DOI: 10.1002/rcs.2389
Citations: 3
A deep learning framework for real‐time 3D model registration in robot‐assisted laparoscopic surgery
Erica Padovan, Giorgia Marullo, L. Tanzi, P. Piazzolla, Sandro Moos, F. Porpiglia, E. Vezzetti
The current study presents a deep learning framework to determine, in real time, the position and rotation of a target organ from endoscopic video. These inferred data are used to overlay the 3D model of the patient's organ on its real counterpart, and the resulting augmented video stream is returned to the surgeon as support during robot-assisted laparoscopic procedures.
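The overlay step amounts to applying the inferred rigid pose to the model and projecting it into the image. A sketch with a pinhole camera model; the intrinsics, pose values, and function name are invented for illustration:

```python
import numpy as np

def project_points(points, R, t, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
    """Apply the rigid pose (R, t), then pinhole-project to pixel coords."""
    cam = points @ R.T + t            # model frame -> camera frame
    u = fx * cam[:, 0] / cam[:, 2] + cx
    v = fy * cam[:, 1] / cam[:, 2] + cy
    return np.stack([u, v], axis=1)

pts = np.array([[0.0, 0.0, 0.0],      # two model vertices (metres)
                [0.01, 0.0, 0.0]])
pose_R = np.eye(3)                    # inferred rotation (identity here)
pose_t = np.array([0.0, 0.0, 0.1])    # 10 cm in front of the camera
pix = project_points(pts, pose_R, pose_t)
```

In the real system, the projected vertices would be rasterised over each video frame to produce the augmented stream.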
Pub Date: 2022-03-04 · DOI: 10.1002/rcs.2387
Citations: 9
A Novel Solution of Using Mixed Reality in Bowel and Oral and Maxillofacial Surgical Telepresence: 3D Mean Value Cloning Algorithm.
Arjina Maharjan, Abeer Alsadoon, P W C Prasad, Nada AlSallami, Tarik A Rashid, Ahmad Alrubaie, Sami Haddad

Background and aim: Most mixed reality models used in surgical telepresence suffer from discrepancies in the boundary area and spatial-temporal inconsistency caused by illumination variation across video frames. This work proposes a new solution that produces a composite video by merging the augmented video of the surgical site with the virtual hand of the remote expert surgeon. The aim is to decrease processing time and improve the accuracy of the merged video by reducing overlay and visualisation error and removing occlusion and artefacts.

Methodology: The proposed system enhances the mean value cloning algorithm to maintain the spatial-temporal consistency of the final composite video. The enhanced algorithm incorporates 3D mean value coordinates and an improvised mean value interpolant into the image cloning process, which reduces sawtooth, smudging and discolouration artefacts around the blending region.

Results: Compared with the state-of-the-art solution, overlay error improved from 1.01 mm to 0.80 mm, and visualisation accuracy improved from 98.8% to 99.4%. Processing time was reduced from 0.211 s to 0.173 s.

Conclusion: The solution keeps the object of interest consistent with the light intensity of the target image by adding a space-distance term that maintains spatial consistency in the final merged video.
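Mean value coordinates are the building block of this family of cloning algorithms: each interior point gets a set of weights over the boundary polygon, used to interpolate boundary colour differences smoothly into the cloned region. A minimal 2D sketch with no degeneracy handling (the paper's 3D variant and interpolant are not reproduced):

```python
import numpy as np

def mean_value_coords(p, verts):
    """Mean value coordinates of point p w.r.t. a closed 2D polygon."""
    d = verts - p                          # vectors from p to each vertex
    r = np.linalg.norm(d, axis=1)
    n = len(verts)
    ang = np.empty(n)                      # signed angle v_i -> v_{i+1} at p
    for i in range(n):
        a, b = d[i], d[(i + 1) % n]
        ang[i] = np.arctan2(a[0] * b[1] - a[1] * b[0], np.dot(a, b))
    w = np.empty(n)
    for i in range(n):
        w[i] = (np.tan(ang[i - 1] / 2) + np.tan(ang[i] / 2)) / r[i]
    return w / w.sum()                     # normalised barycentric weights

square = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
lam = mean_value_coords(np.array([0.5, 0.5]), square)
```

At the centre of the unit square the four weights are equal, and the weights always reproduce the query point as a convex combination of the vertices.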

Pub Date: 2020-09-04 · DOI: 10.1002/rcs.2161 · e2161
Citations: 0
A Novel Visualization System of Using Augmented Reality in Knee Replacement Surgery: Enhanced Bidirectional Maximum Correntropy Algorithm.
Nitish Maharjan, Abeer Alsadoon, P W C Prasad, Salma Abdullah, Tarik A Rashid

Background and aim: Image registration and alignment are the main limitations of augmented reality-based knee replacement surgery. This research aims to decrease registration error, eliminate outcomes trapped in local minima so as to improve alignment, handle occlusion, and maximise the overlapping parts.

Methodology: A markerless image registration method was used for augmented reality-based knee replacement surgery to guide and visualise the operation, while a weighted least-squares algorithm enhanced stereo-camera-based tracking by filling border occlusion right to left and non-border occlusion left to right.

Results: The study improved video precision to an alignment error of 0.57-0.61 mm. Furthermore, using bidirectional (forward and backward) directional cloud points reduced the number of image registration iterations, which also improved the processing speed of video frames to 7.4-11.74 fps.

Conclusions: The proposed system addresses the misalignment caused by patient movement and enhances AR visualisation during knee replacement surgery. It reduces alignment error by ascertaining the optimal rigid transformation between two cloud points and removing outliers and non-Gaussian noise, enabling accurate visualisation and navigation of knee anatomy such as the femur, tibia, cartilage and blood vessels.
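The core of such point-cloud alignment is estimating the optimal rigid transform between corresponding points. The paper uses a maximum-correntropy criterion for robustness to outliers; the classical least-squares answer below (Kabsch/SVD) is the standard baseline such methods build on, shown here as a sketch:

```python
import numpy as np

def rigid_transform(src, dst):
    """Best-fit R, t minimising sum ||R @ src_i + t - dst_i||^2 (Kabsch)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # fix an improper (reflective) fit
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

rng = np.random.default_rng(2)
src = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 0.05])
dst = src @ R_true.T + t_true
R_est, t_est = rigid_transform(src, dst)
```

Replacing the squared-error objective with a correntropy kernel, as the paper does, downweights outlier correspondences instead of letting them dominate the fit.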

Pub Date: 2020-09-01 · DOI: 10.1002/rcs.2154 · e2154
Citations: 0
Indirect visual guided fracture reduction robot based on external markers.
Zhuoxin Fu, Hao Sun, Xinyu Dong, Jianwen Chen, Hongtao Rong, Yue Guo, Shenxin Lin

Background: Traditional fracture reduction surgery cannot guarantee reduction accuracy and consumes the surgeon's physical strength. Although monitoring the reduction process with radiography can improve accuracy, it exposes both patients and surgeons to radiation.

Methods: We propose a novel fracture reduction solution in which a parallel robot performs the reduction surgery. A binocular camera indirectly obtains the position and posture of the tissue-covered fragment by measuring the posture of external markers. A reduction path is designed according to clinical experience in fracture reduction, and position-based visual servoing then controls the robot through the reduction. The study was approved by the Rehabilitation Hospital, National Research Center for Rehabilitation Technical Aids, Beijing, China.
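Position-based visual servoing drives the pose error between the tracked fragment and its planned target toward zero each control cycle. A deliberately simplified sketch with a translation-only pose and a proportional law; the gain, poses, and iteration count are invented:

```python
import numpy as np

def pbvs_step(pose, target, gain=0.2):
    """One proportional visual-servoing step: move a fixed fraction of
    the remaining pose error per control cycle."""
    return pose + gain * (target - pose)

pose = np.array([10.0, -5.0, 3.0])   # tracked fragment position (mm), invented
target = np.zeros(3)                 # planned reduced position
for _ in range(40):                  # error shrinks geometrically (0.8^k)
    pose = pbvs_step(pose, target)
```

The real controller would act on the full 6-DOF pose measured from the external markers and command the parallel robot's actuators accordingly.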

Results: Ten virtual fracture cases were used for reduction experiments, with both simulation and model-bone experiments designed. In the model-bone experiments, the fragments were reduced without collision. After reduction, the angulation error was 3.3° ± 1.8°, the axial rotation error 0.8° ± 0.3°, the transverse stagger error 2 mm ± 0.5 mm, and the axial direction error 2.5 mm ± 1 mm. After the reduction surgery, an external fixator assists fixation and the deformity is completely corrected.

Conclusions: The solution performs fracture reduction surgery with acceptable accuracy, effectively reduces the number of radiographic exposures during surgery, and avoids collisions between fragments.

Pub Date: 2020-08-19 · DOI: 10.1002/rcs.2153 · e2153
Citations: 0
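The fracture-reduction abstract above relies on position-based visual servoing: the camera measures the fragment's current pose from the external markers, the controller computes the error against the planned pose, and the robot is commanded to shrink that error each cycle. A minimal sketch of that loop, with the pose simplified to (x, y, z, axial rotation) and an illustrative proportional gain — the function names and numbers are hypothetical, not taken from the paper:

```python
# Minimal position-based visual servoing (PBVS) loop, illustrative only.
# Pose is simplified to (x, y, z, axial rotation); units are mm and degrees.

def servo_step(current, target, gain=0.5):
    """One proportional step: move by a fraction of the remaining pose error."""
    return [c + gain * (t - c) for c, t in zip(current, target)]

def reduce_fracture(current, target, tol=1e-3, max_iters=100):
    """Drive the fragment pose toward the target until the error is below tol."""
    for i in range(max_iters):
        error = max(abs(t - c) for c, t in zip(current, target))
        if error < tol:
            return current, i
        current = servo_step(current, target)
    return current, max_iters

# Fragment pose as measured via the external markers (hypothetical numbers)
measured = [12.0, -4.0, 7.5, 3.3]
aligned = [0.0, 0.0, 0.0, 0.0]   # reduced position: fragments realigned
final, iters = reduce_fracture(measured, aligned)
```

In a real closed-loop system the pose would be re-measured from the markers on every cycle rather than integrated open-loop as here, which is what makes the camera-in-the-loop approach robust to calibration error.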
Initial experience of robot-assisted thoracoscopic surgery in China. 机器人辅助胸腔镜手术在中国的初步经验。
Pub Date: 2014-12-01 Epub Date: 2014-04-29 DOI: 10.1002/rcs.1589
Jia Huang, Qingquan Luo, Qiang Tan, Hao Lin, Liqiang Qian, Xu Lin

Background: The objective of this study was to evaluate the safety and feasibility of robot-assisted thoracoscopic surgery (RATS).

Methods: From May 2009 to May 2013, 48 patients with intrathoracic lesions underwent RATS with the da Vinci® Surgical System (11 lobectomies, 37 mediastinal tumour resections).

Results: RATS was successfully and safely completed in all 48 patients. Conversion of the operation to open surgery was not needed in any patient. The average operation time was 85.9 min, average blood loss 33 ml, and average hospital stay 3.9 days. No patient required blood transfusion. The only recognized adverse event was the development of a bronchopleural fistula in one patient.

Conclusions: RATS appears feasible and safe in thoracic surgery. More investigation will be needed in order to determine its possible long-term benefits and cost effectiveness.

{"title":"Initial experience of robot-assisted thoracoscopic surgery in China.","authors":"Jia Huang, Qingquan Luo, Qiang Tan, Hao Lin, Liqiang Qian, Xu Lin","doi":"10.1002/rcs.1589","DOIUrl":"https://doi.org/10.1002/rcs.1589","url":null,"abstract":"<p><strong>Background: </strong>The objective of this study was to evaluate the safety and feasibility of robot-assisted thoracoscopic surgery (RATS).</p><p><strong>Methods: </strong>From May 2009 to May 2013, 48 patients with intrathoracic lesions underwent RATS with the da Vinci® Surgical System (11 lobectomies, 37 mediastinal tumour resections).</p><p><strong>Results: </strong>RATS was successfully and safely completed in all 48 patients. Conversion of the operation to open surgery was not needed in any patient. The average operation time was 85.9 min, average blood loss 33 ml, and average hospital stay 3.9 days. No patient required blood transfusion. The only recognized adverse event was the development of a bronchopleural fistula in one patient.</p><p><strong>Conclusions: </strong>RATS appears feasible and safe in thoracic surgery. More investigation will be needed in order to determine its possible long-term benefits and cost effectiveness.</p>","PeriodicalId":75029,"journal":{"name":"The international journal of medical robotics + computer assisted surgery : MRCAS","volume":"10 4","pages":"404-9"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1002/rcs.1589","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"32301792","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11