Background: This study aimed to explore the effect of body mass index (BMI) on intraoperative conditions and postoperative complications (POCs) in robotic gastric cancer (GC) surgery.
Methods: This retrospective analysis included 60 patients with GC who underwent robotic radical gastrectomy (RG) at our hospital. Patients were allocated to a normal-BMI group (18.5 kg/m² ≤ BMI < 25 kg/m²) or a high-BMI group (BMI ≥ 25 kg/m²), and the effect of BMI on intraoperative conditions and POCs was examined.
Results: No statistically significant differences were found between the two groups in surgical procedure (p = 0.669), time to first postoperative flatus (p = 0.172), length of hospital stay (p = 0.454), number of retrieved lymph nodes (LNs) (p = 1.000) or POCs (p > 0.05). However, the high-BMI group had greater intraoperative blood loss (p = 0.018) and a longer operating time (p = 0.016).
Conclusions: BMI may not affect the safety of RG for GC, although high BMI was associated with increased blood loss and prolonged operative time.
{"title":"Impact of Body Mass Index on Outcomes in Robotic Gastric Cancer Surgery.","authors":"Yujian Xia, Chaoran Yu, Zhaoqiang Chen, Shenjia Wang, Chenglei Yuan, Xiaojun Zhou, Xin Zhao","doi":"10.1002/rcs.70141","DOIUrl":"https://doi.org/10.1002/rcs.70141","url":null,"abstract":"<p><strong>Background: </strong>This study aimed to explore the effect of BMI on intraoperative conditions and postoperative complications (POCs) in robotic GC surgery.</p><p><strong>Methods: </strong>This is a retrospective analysis conducted on 60 patients who have GC and received robotic radical gastrectomy (RG) in our hospital. The patients were allocated into normal (18.5 kg/m<sup>2</sup> ≤ BMI < 25 kg/m<sup>2</sup>) and high-BMI groups (BMI ≥ 25 kg/m<sup>2</sup>). The effect of BMI on intraoperative conditions and POCs was examined.</p><p><strong>Results: </strong>The results revealed no statistical differences between both groups in terms of surgical procedure (p = 0.669), time of first postoperative flatus (p = 0.172), in-hospital stay (p = 0.454), Retrieved LNs (Lymph nodes) number (p = 1.000) and POCs (p < 0.05). However, the high BMI group had greater intraoperative bleeding (p = 0.018) and longer operating time (p = 0.016).</p><p><strong>Conclusions: </strong>To conclude, BMI may not affect the safety of RG for GC. Nevertheless, high BMI was associated with increased blood loss and prolonged operative time.</p>","PeriodicalId":75029,"journal":{"name":"The international journal of medical robotics + computer assisted surgery : MRCAS","volume":"22 1","pages":"e70141"},"PeriodicalIF":0.0,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146108958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Baoping Zhu, Linjie Qu, Linkuan Zhou, Zhenyu Luo, Yan Chen
Background: Automatic segmentation of the foetal head from ultrasound imagery is a key step in prenatal examination. However, high-quality semi-supervised foetal head segmentation remains challenging owing to low image resolution, unclear boundaries, and inconsistencies between labelled and unlabelled data.
Methods: To overcome these obstacles, we propose MCPNet, a morphological constraint-based copy-paste network for semi-supervised foetal head segmentation, incorporating score-guided morphological refinement (SMR) and copy-paste mixing augmentation (CPMA). SMR employs weighted scores derived from Sobel operators and the Euclidean distance transform to enforce boundary consistency. To mitigate the distribution gap between labelled and unlabelled data, we introduce CPMA, which uses random cropping to swap foreground and background between labelled and unlabelled samples.
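As a rough, non-authoritative sketch of the CPMA idea only (the function name, crop policy and array layout are assumptions, not the paper's implementation), the following swaps a random rectangular crop between a labelled and an unlabelled image:

```python
import numpy as np

def copy_paste_mix(labelled_img, unlabelled_img, crop_frac=0.5, rng=None):
    """Swap a random rectangular crop between two same-shaped images.

    Illustrative sketch of copy-paste mixing: a region is exchanged between
    a labelled and an unlabelled sample so each mixed image contains content
    from both distributions. Crop size/placement policy is an assumption.
    """
    if rng is None:
        rng = np.random.default_rng()
    h, w = labelled_img.shape[:2]
    ch, cw = int(h * crop_frac), int(w * crop_frac)
    y = int(rng.integers(0, h - ch + 1))
    x = int(rng.integers(0, w - cw + 1))

    mixed_a = labelled_img.copy()
    mixed_b = unlabelled_img.copy()
    mixed_a[y:y + ch, x:x + cw] = unlabelled_img[y:y + ch, x:x + cw]
    mixed_b[y:y + ch, x:x + cw] = labelled_img[y:y + ch, x:x + cw]
    return mixed_a, mixed_b
```

The corresponding segmentation targets would be mixed with the same crop, pairing ground-truth labels with pseudo-labels for the unlabelled regions.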
Results: On the HC18 and PSFH benchmarks, our method achieves Dice scores of 93.72% and 92.31%, respectively, with 20% labelled data.
Conclusions: These results demonstrate the method's superior performance and clinical potential.
{"title":"MCPNet: Morphological Constraint-Based Copy-Paste Network for Semi-Supervised Foetal Head Segmentation.","authors":"Baoping Zhu, Linjie Qu, Linkuan Zhou, Zhenyu Luo, Yan Chen","doi":"10.1002/rcs.70140","DOIUrl":"https://doi.org/10.1002/rcs.70140","url":null,"abstract":"<p><strong>Background: </strong>The foetal head's automatic segmentation from ultrasound imagery is considered a key step in prenatal examination. However, achieving high-quality semi-supervised foetal head image segmentation remains challenging due to low image resolution, unclear boundaries, and inconsistencies between labelled and unlabelled data.</p><p><strong>Methods: </strong>To overcome these obstacles, we propose MCPNet, a morphological constraint-based copy-paste network for semi-supervised foetal head segmentation, incorporating score-guided morphological refinement (SMR) and copy-paste mixing augmentation (CPMA). SMR employs weighted scores derived from Sobel operators and Euclidean transform to ensure boundary consistency. Additionally, to mitigate the distribution gap between labelled and unlabelled data, we introduce CPMA. This method uses random cropping to swap foreground and background between labelled and unlabelled data.</p><p><strong>Results: </strong>On the HC18 and PSFH benchmarks, our method achieves Dice scores of 93.72% and 92.31% respectively with 20% labelled data.</p><p><strong>Conclusions: </strong>The results demonstrate our superior performance and clinical potential.</p>","PeriodicalId":75029,"journal":{"name":"The international journal of medical robotics + computer assisted surgery : MRCAS","volume":"22 1","pages":"e70140"},"PeriodicalIF":0.0,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146115021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ramazan Rajabi, Mehrnaz Aghanouri, Hamid Moradi, Alireza Mirbagheri
Background: The use of robotic telesurgery has grown because of its high accuracy, fewer complications, and remote-control capability. To improve the accuracy of the robotic arms in these systems, a precise dynamic model is essential.
Methods: In this study, we focus on the Sinaflex robotic telesurgery system and develop a dynamic model for its novel slave robot. Our approach involves deriving and linearising the dynamic equations, defining optimal excitation trajectories, and estimating the dynamic parameters by least-squares optimisation. To assess identification accuracy, the joint torques predicted by the model were compared with those measured experimentally.
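For readers unfamiliar with this identification pipeline, the dynamics are conventionally rewritten linearly in the unknown parameters, τ = Y(q, q̇, q̈)θ, and θ is estimated by least squares over the excitation trajectory. The sketch below shows only that generic step, under assumed names; the Sinaflex-specific regressor derivation is in the paper and not reproduced here:

```python
import numpy as np

def identify_dynamic_params(regressors, torques):
    """Least-squares estimate of theta in tau = Y @ theta.

    regressors: iterable of (n_joints, n_params) regressor matrices Y_k,
                one per sample along the excitation trajectory.
    torques:    iterable of measured joint-torque vectors tau_k, (n_joints,).
    """
    Y = np.vstack(list(regressors))        # stack samples row-wise
    tau = np.concatenate(list(torques))
    theta, *_ = np.linalg.lstsq(Y, tau, rcond=None)
    return theta

def rms_torque_error(Y, tau, theta):
    """RMS of predicted vs. measured torque, the accuracy metric reported."""
    return float(np.sqrt(np.mean((Y @ theta - tau) ** 2)))
```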
Results: The results show that the method accurately predicts joint torques, with root mean square (RMS) errors ranging from 0.58 to 1.48 Nm.
Conclusions: The proposed identification method yields more accurate dynamic parameters for robots with complex mechanisms.
{"title":"Dynamic Modelling of the Surgery Arm in Sina<sub>flex</sub> Robotic Telesurgery System.","authors":"Ramazan Rajabi, Mehrnaz Aghanouri, Hamid Moradi, Alireza Mirbagheri","doi":"10.1002/rcs.70093","DOIUrl":"https://doi.org/10.1002/rcs.70093","url":null,"abstract":"<p><strong>Background: </strong>The use of robotic telesurgery has increased because of its high accuracy, fewer complications, and remote-control capability. To improve the accuracy of robotic arms in these systems, it is essential to have a precise dynamic model.</p><p><strong>Methods: </strong>In this study, we focus on the Sina<sub>flex</sub> robotic telesurgery system and develop a dynamic model for a novel slave robot. Our approach involves deriving and linearising dynamic equations, defining optimal excitation trajectories, and estimating dynamic parameters using least square optimisation. To investigate the parameters' identification accuracy, the joint torques predicted by the model were compared with those actually obtained from the experiments.</p><p><strong>Results: </strong>The results reveal that the method accurately predicts joint torques with the root mean square ( RMS) ranging from 0.58 to 1.48 Nm.</p><p><strong>Conclusions: </strong>Using the proposed method in this paper for identifying the robot dynamic parameters leads to more accurate results for robots with complex mechanisms.</p>","PeriodicalId":75029,"journal":{"name":"The international journal of medical robotics + computer assisted surgery : MRCAS","volume":"21 4","pages":"e70093"},"PeriodicalIF":0.0,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144801190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Man-Ling Wang, Bor-Shiuan Shyr, Shih-Chin Chen, Shin-E Wang, Y. Shyr, B. Shyr
Background: Central pancreatectomy (CP) is an ideal parenchyma-sparing procedure, but experience with robotic central pancreatectomy (RCP) remains very limited.
Materials and methods: Patients undergoing CP were included, and comparisons were made between the RCP and open central pancreatectomy (OCP) groups.
Results: The most common lesion in patients undergoing CP was serous cystadenoma (35.5%). The median operation time was 4.2 h for RCP versus 5.5 h for OCP. Median blood loss was significantly lower with RCP (20 c.c. versus 170 c.c., p = 0.001). Postoperative pancreatic fistula occurred in 19.4% of all patients: 22.1% after RCP and 15.4% after OCP. There was no significant difference in other surgical complications between the groups. Only one patient, in the OCP group, developed de novo diabetes mellitus (DM), and no steatorrhoea/diarrhoea occurred after either procedure.
Conclusions: RCP is feasible and safe without compromising surgical outcomes or pancreatic function.
{"title":"Comparison of robotic and open central pancreatectomy.","authors":"Man-Ling Wang, Bor-Shiuan Shyr, Shih-Chin Chen, Shin-E Wang, Y. Shyr, B. Shyr","doi":"10.14701/ahbps.2023s1.bp-pp-4-7","DOIUrl":"https://doi.org/10.14701/ahbps.2023s1.bp-pp-4-7","url":null,"abstract":"BACKGROUND\u0000Central pancreatectomy (CP) is an ideal parenchyma-sparing procedure. The experience of r robotic central pancreatectomy (RCP) is very limited.\u0000\u0000\u0000MATERIALS AND METHODS\u0000Patients undergoing CP were included. Comparisons were made between RCP and open central pancreatectomy (OCP) groups.\u0000\u0000\u0000RESULTS\u0000The most common lesion in patients undergoing CP was serous cystadenoma (35.5%). The median operation time was 4.2 h for RCP versus 5.5 h for OCP. The median blood loss was significantly lower in RCP, 20 c.c. versus 170 c.c., p = 0.001. Postoperative pancreatic fistula occurred in 19.4% of all patients, with 22.1% in RCP and 15.4% in OCP. There was no significant difference regarding other surgical complications between the RCP and OCP groups. Only one patient in the OCP group developed de novo diabetes mellitus (DM), and no steatorrhoea/diarrhoea occurred after either RCP or OCP.\u0000\u0000\u0000CONCLUSIONS\u0000RCP is feasible and safe without compromising surgical outcomes and pancreatic functions.","PeriodicalId":75029,"journal":{"name":"The international journal of medical robotics + computer assisted surgery : MRCAS","volume":"46 1","pages":"e2562"},"PeriodicalIF":0.0,"publicationDate":"2023-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80577274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A. Antoniou, A. Georgiou, N. Evripidou, C. Damianou
High-quality methods for Magnetic Resonance-guided Focussed Ultrasound (MRgFUS) therapy planning are needed for safe and efficient clinical practice. Herein, an algorithm for full-coverage path planning based on preoperative MR images is presented.
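A minimal sketch of one common full-coverage strategy, a boustrophedon (serpentine) sweep over a binary target mask segmented from the MR images, is given below; the paper's actual algorithm may differ, and the grid step is an assumed parameter:

```python
import numpy as np

def coverage_path(mask, step):
    """Serpentine coverage of a 2D binary target mask.

    mask: boolean array marking the planning target (from preoperative MRI).
    step: grid spacing in pixels (e.g., tied to the focal-spot diameter).
    Returns ordered (row, col) sonication points; rows alternate direction
    so successive points stay adjacent.
    """
    points = []
    for i, r in enumerate(range(0, mask.shape[0], step)):
        cols = list(range(0, mask.shape[1], step))
        if i % 2:                      # reverse every other row
            cols.reverse()
        points.extend((r, c) for c in cols if mask[r, c])
    return points
```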
{"title":"Full coverage path planning algorithm for MRgFUS therapy","authors":"A. Antoniou, A. Georgiou, N. Evripidou, C. Damianou","doi":"10.1002/rcs.2389","DOIUrl":"https://doi.org/10.1002/rcs.2389","url":null,"abstract":"High‐quality methods for Magnetic Resonance guided Focussed Ultrasound (MRgFUS) therapy planning are needed for safe and efficient clinical practices. Herein, an algorithm for full coverage path planning based on preoperative MR images is presented.","PeriodicalId":75029,"journal":{"name":"The international journal of medical robotics + computer assisted surgery : MRCAS","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90881617","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Erica Padovan, Giorgia Marullo, L. Tanzi, P. Piazzolla, Sandro Moos, F. Porpiglia, E. Vezzetti
The current study presents a deep learning framework to determine, in real time, the position and rotation of a target organ from endoscopic video. These inferred data are used to overlay the 3D model of the patient's organ on its real counterpart, and the resulting augmented video stream is fed back to the surgeon as support during laparoscopic robot-assisted procedures.
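As a hedged illustration of the overlay step only (the pose-inference network and video compositing are not reproduced, and all names are assumptions), the sketch below applies an inferred rotation and translation to the preoperative model and projects it with a pinhole camera model:

```python
import numpy as np

def pose_and_project(vertices, R, t, fx, fy, cx, cy):
    """Transform model vertices by the inferred pose and project to pixels.

    vertices: (N, 3) model points; R: (3, 3) rotation and t: (3,)
    translation inferred per frame; fx, fy, cx, cy: camera intrinsics.
    Points are assumed to lie in front of the camera (z > 0).
    """
    cam = vertices @ R.T + t               # model -> camera coordinates
    uv = cam[:, :2] / cam[:, 2:3]          # perspective division
    return uv * np.array([fx, fy]) + np.array([cx, cy])
```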
{"title":"A deep learning framework for real‐time 3D model registration in robot‐assisted laparoscopic surgery","authors":"Erica Padovan, Giorgia Marullo, L. Tanzi, P. Piazzolla, Sandro Moos, F. Porpiglia, E. Vezzetti","doi":"10.1002/rcs.2387","DOIUrl":"https://doi.org/10.1002/rcs.2387","url":null,"abstract":"The current study presents a deep learning framework to determine, in real‐time, position and rotation of a target organ from an endoscopic video. These inferred data are used to overlay the 3D model of patient's organ over its real counterpart. The resulting augmented video flow is streamed back to the surgeon as a support during laparoscopic robot‐assisted procedures.","PeriodicalId":75029,"journal":{"name":"The international journal of medical robotics + computer assisted surgery : MRCAS","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78535961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Arjina Maharjan, Abeer Alsadoon, P W C Prasad, Nada AlSallami, Tarik A Rashid, Ahmad Alrubaie, Sami Haddad
Background and aim: Most mixed reality models used in surgical telepresence suffer from discrepancies in the boundary area and spatial-temporal inconsistency caused by illumination variation across video frames. This work proposes a new solution for producing a composite video that merges the augmented video of the surgical site with the virtual hand of the remote expert surgeon. The goals are to decrease processing time and to enhance the accuracy of the merged video by reducing overlay and visualisation errors and removing occlusion and artefacts.
Methodology: The proposed system enhances the mean value cloning algorithm to maintain the spatial-temporal consistency of the final composite video. The enhanced algorithm incorporates 3D mean value coordinates and an improvised mean value interpolant into the image-cloning process, which reduces the sawtooth, smudging and discolouration artefacts around the blending region.
Results: Compared with the state-of-the-art solution, accuracy in terms of overlay error improved from 1.01 mm to 0.80 mm, while accuracy in terms of visualisation error improved from 98.8% to 99.4%. Processing time was reduced from 0.211 s to 0.173 s.
Conclusion: Our solution makes the object of interest consistent with the light intensity of the target image by adding a spatial-distance term that maintains spatial consistency in the final merged video.
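For readers unfamiliar with mean value cloning, the sketch below computes classical 2D mean value coordinates, the weights used to diffuse the source/target colour mismatch from the boundary into the pasted region; the paper's 3D coordinates and improvised interpolant are not reproduced here:

```python
import numpy as np

def mean_value_coords(boundary, x):
    """Mean value coordinates of interior point x w.r.t. a closed polygon.

    boundary: (N, 2) polygon vertices in order; x: (2,) point strictly
    inside. Returns weights lambda_i summing to 1; in cloning, the
    boundary discrepancy interpolated with these weights is added to the
    pasted pixels.
    """
    d = boundary - x                                  # vectors to vertices
    r = np.linalg.norm(d, axis=1)                     # distances
    ang = np.arctan2(d[:, 1], d[:, 0])
    alpha = np.diff(np.concatenate([ang, ang[:1]]))   # angle at x per edge
    alpha = (alpha + np.pi) % (2 * np.pi) - np.pi     # wrap to (-pi, pi]
    tan_half = np.tan(alpha / 2.0)
    w = (np.roll(tan_half, 1) + tan_half) / r         # Floater's formula
    return w / w.sum()
```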
{"title":"A Novel Solution of Using Mixed Reality in Bowel and Oral and Maxillofacial Surgical Telepresence: 3D Mean Value Cloning algorithm.","authors":"Arjina Maharjan, Abeer Alsadoon, P W C Prasad, Nada AlSallami, Tarik A Rashid, Ahmad Alrubaie, Sami Haddad","doi":"10.1002/rcs.2161","DOIUrl":"10.1002/rcs.2161","url":null,"abstract":"<p><strong>Background and aim: </strong>Most of the Mixed Reality models used in the surgical telepresence are suffering from the discrepancies in the boundary area and spatial-temporal inconsistency due to the illumination variation in the video frames. The aim behind this work is to propose a new solution that helps produce the composite video by merging the augmented video of the surgery site and virtual hand of the remote expertise surgeon. The purpose of the proposed solution is to decrease the processing time and enhance the accuracy of merged video by decreasing the overlay and visualization error and removing occlusion and artefacts.</p><p><strong>Methodology: </strong>The proposed system enhanced the mean value cloning algorithm that helps to maintain the spatial-temporal consistency of the final composite video. The enhanced algorithm includes the 3D mean value coordinates and improvised mean value interpolant in the image cloning process, which helps to reduce the sawtooth, smudging and discoloration artefacts around the blending region RESULTS: As compared to the state of art solution, the accuracy in terms of overlay error of the proposed solution is improved from 1.01mm to 0.80mm whereas the accuracy in terms of visualization error is improved from 98.8% to 99.4%. The processing time is reduced to 0.173 seconds from 0.211 seconds CONCLUSION: Our solution helps make the object of interest consistent with the light intensity of the target image by adding the space distance that helps maintain the spatial consistency in the final merged video. This article is protected by copyright. All rights reserved.</p>","PeriodicalId":75029,"journal":{"name":"The international journal of medical robotics + computer assisted surgery : MRCAS","volume":" ","pages":"e2161"},"PeriodicalIF":0.0,"publicationDate":"2020-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38440926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nitish Maharjan, Abeer Alsadoon, P W C Prasad, Salma Abdullah, Tarik A Rashid
Background and aim: Image registration and alignment are the main limitations of augmented reality-based knee replacement surgery. This research aims to decrease registration error, eliminate outcomes trapped in local minima so as to improve alignment, handle occlusion, and maximise the overlapping parts.
Methodology: A markerless image registration method was used in augmented reality-based knee replacement surgery to guide and visualise the surgical operation, while a weighted least-squares algorithm enhanced stereo camera-based tracking by filling border occlusion right-to-left and non-border occlusion left-to-right.
Results: This study improved video precision to an alignment error of 0.57 mm ∼ 0.61 mm. Furthermore, using bidirectional (forward and backward) directional cloud points reduced the number of image-registration iterations, which also improved the processing time: video frames were processed at 7.4 ∼ 11.74 fps.
Conclusions: The proposed system focuses on overcoming the misalignment caused by patient movement and on enhancing AR visualisation during knee replacement surgery. It proved reliable and favourable, eliminating alignment error by ascertaining the optimal rigid transformation between the two cloud points and removing outliers and non-Gaussian noise. The proposed augmented reality system supports accurate visualisation and navigation of knee anatomy such as the femur, tibia, cartilage and blood vessels.
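The "optimal rigid transformation between the two cloud points" referred to above is commonly obtained in closed form via SVD (the Kabsch/Umeyama solution); a weighted sketch follows, where the per-point weights are an assumption standing in for the paper's correntropy-based outlier handling:

```python
import numpy as np

def rigid_transform(src, dst, weights=None):
    """R, t minimising sum_i w_i * ||R @ src_i + t - dst_i||^2.

    src, dst: (N, 3) corresponding point clouds. A correntropy-style scheme
    would set weights from a Gaussian kernel on residuals to suppress
    outliers; uniform weights recover the classical solution.
    """
    w = np.ones(len(src)) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    mu_s, mu_d = w @ src, w @ dst                     # weighted centroids
    H = (src - mu_s).T @ ((dst - mu_d) * w[:, None])  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                                # proper rotation
    t = mu_d - R @ mu_s
    return R, t
```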
{"title":"A Novel Visualization System of Using Augmented Reality in Knee Replacement Surgery: Enhanced Bidirectional Maximum CorrentropyAlgorithm.","authors":"Nitish Maharjan, Abeer Alsadoon, P W C Prasad, Salma Abdullah, Tarik A Rashid","doi":"10.1002/rcs.2154","DOIUrl":"10.1002/rcs.2154","url":null,"abstract":"<p><strong>Background and aim: </strong>Image registration and alignment are the main limitations of augmented reality-based knee replacement surgery. This research aims to decrease the registration error, eliminate outcomes that are trapped in local minima to improve the alignment problems, handle the occlusion and maximize the overlapping parts.</p><p><strong>Methodology: </strong>markerless image registration method was used for Augmented reality-based knee replacement surgery to guide and visualize the surgical operation. While weight least square algorithm was used to enhance stereo camera-based tracking by filling border occlusion in right to left direction and non-border occlusion from left to right direction.</p><p><strong>Results: </strong>This study has improved video precision to 0.57 mm ∼ 0.61 mm alignment error. Furthermore, with the use of bidirectional points, i.e. Forwards and backwards directional cloud point, the iteration on image registration was decreased. This has led to improved the processing time as well. The processing time of video frames was improved to 7.4 ∼11.74 fps.</p><p><strong>Conclusions: </strong>It seems clear that this proposed system has focused on overcoming the misalignment difficulty caused by movement of patient and enhancing the AR visualization during knee replacement surgery. The proposed system was reliable and favourable which helps in eliminating alignment error by ascertaining the optimal rigid transformation between two cloud points and removing the outliers and non-Gaussian noise. The proposed augmented reality system helps in accurate visualization and navigation of anatomy of knee such as femur, tibia, cartilage, blood vessels, etc. This article is protected by copyright. All rights reserved.</p>","PeriodicalId":75029,"journal":{"name":"The international journal of medical robotics + computer assisted surgery : MRCAS","volume":" ","pages":"e2154"},"PeriodicalIF":0.0,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38335714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Background: Traditional fracture reduction surgery cannot ensure the accuracy of the reduction while it drains the surgeon's physical strength. Although monitoring the fracture reduction process by radiography can improve reduction accuracy, it exposes both patients and surgeons to radiation.
Methods: We propose a novel solution in which a parallel robot performs fracture reduction surgery. A binocular camera indirectly obtains the position and posture of the tissue-wrapped fragment by measuring the posture of external markers. A reduction path is designed according to clinical experience, and position-based visual servoing then controls the robot through the reduction. The study was approved by the Rehabilitation Hospital, National Research Center for Rehabilitation Technical Aids, Beijing, China.
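A minimal sketch of one position-based visual-servoing step as described above: the marker-derived fragment position is driven toward the next waypoint on the planned reduction path by a clipped proportional command. The gain, units, and omission of orientation control are assumptions:

```python
import numpy as np

def pbvs_step(current_pos, target_pos, gain=0.5, max_step=1.0):
    """One translation-only servoing increment for the parallel robot.

    current_pos: fragment position estimated from the external markers.
    target_pos:  next waypoint on the planned reduction path.
    The command is clipped to max_step per control cycle for safety.
    """
    error = np.asarray(target_pos, float) - np.asarray(current_pos, float)
    step = gain * error
    norm = np.linalg.norm(step)
    if norm > max_step:
        step *= max_step / norm
    return step
```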
Results: Ten virtual fracture cases were used in the reduction experiments, with simulation and model-bone experiments designed respectively. In the model-bone experiments, the fragments were reduced without collision. After reduction, the angulation error was 3.3° ± 1.8°, the axial rotation error was 0.8° ± 0.3°, and the transverse stagger and axial direction errors were 2 mm ± 0.5 mm and 2.5 mm ± 1 mm, respectively. After the reduction surgery, an external fixator was used to assist fixation so that the deformity could be completely corrected.
Conclusions: The solution can perform fracture reduction surgery with acceptable accuracy, effectively reduces the number of radiographs required during surgery, and avoids collisions between fragments.
{"title":"Indirect visual guided fracture reduction robot based on external markers.","authors":"Zhuoxin Fu, Hao Sun, Xinyu Dong, Jianwen Chen, Hongtao Rong, Yue Guo, Shenxin Lin","doi":"10.1002/rcs.2153","DOIUrl":"10.1002/rcs.2153","url":null,"abstract":"<p><strong>Background: </strong>Traditional fracture reduction surgery cannot ensure the accuracy of the reduction while consuming the physical strength of the surgeon. Although monitoring the fracture reduction process through radiography can improve the accuracy of the reduction, it will bring radiation harm to both patients and surgeons.</p><p><strong>Methods: </strong>We proposed a novel fracture reduction solution that parallel robot is used for fracture reduction surgery. The binocular camera indirectly obtains the position and posture of the fragment wrapped by the tissue by measuring the posture of the external markers. According to the clinical experience of fracture reduction, a path is designed for fracture reduction. Then using position-based visual serving control the robot to fracture reduction surgery. The study is approved by the Rehabilitation Hospital, National Research Center for Rehabilitation Technical Aids, Beijing , China.</p><p><strong>Results: </strong>10 virtual cases of fracture were used for fracture reduction experiments. The simulation and model bone experiments are designed respectively. In model bone experiments, the fragments are reduction without collision. The angulation error after the reduction of this method is:3.3°±1.8°, and the axial rotation error is 0.8°±0.3°, the transverse stagger error and the axial direction error after reduction is 2mm±0.5mm and 2.5mm±1mm. After the reduction surgery, the external fixator is used to assist the fixing, and the deformity will be completely corrected.</p><p><strong>Conclusions: </strong>The solution can perform fracture reduction surgery with certain accuracy and effectively reduce the number of radiographic uses during surgery, and the collision between fragments is avoided during surgery. This article is protected by copyright. All rights reserved.</p>","PeriodicalId":75029,"journal":{"name":"The international journal of medical robotics + computer assisted surgery : MRCAS","volume":" ","pages":"e2153"},"PeriodicalIF":0.0,"publicationDate":"2020-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38279536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Background: The objective of this study was to evaluate the safety and feasibility of robot-assisted thoracoscopic surgery (RATS).
Methods: From May 2009 to May 2013, 48 patients with intrathoracic lesions underwent RATS with the da Vinci® Surgical System (11 lobectomies, 37 mediastinal tumour resections).
Results: RATS was successfully and safely completed in all 48 patients. Conversion of the operation to open surgery was not needed in any patient. The average operation time was 85.9 min, average blood loss 33 ml, and average hospital stay 3.9 days. No patient required blood transfusion. The only recognized adverse event was the development of a bronchopleural fistula in one patient.
Conclusions: RATS appears feasible and safe in thoracic surgery. More investigation will be needed in order to determine its possible long-term benefits and cost effectiveness.
{"title":"Initial experience of robot-assisted thoracoscopic surgery in China.","authors":"Jia Huang, Qingquan Luo, Qiang Tan, Hao Lin, Liqiang Qian, Xu Lin","doi":"10.1002/rcs.1589","DOIUrl":"https://doi.org/10.1002/rcs.1589","url":null,"abstract":"<p><strong>Background: </strong>The objective of this study was to evaluate the safety and feasibility of robot-assisted thoracoscopic surgery (RATS).</p><p><strong>Methods: </strong>From May 2009 to May 2013, 48 patients with intrathoracic lesions underwent RATS with the da Vinci® Surgical System was reported (11 lobectomies, 37 mediastinal tumour resections).</p><p><strong>Results: </strong>RATS was successfully and safely completed in all 48 patients. Conversion of the operation to open surgery was not needed in any patient. The average operation time was 85.9 min, average blood loss 33 ml, and average hospital stay 3.9 days. No patient required blood transfusion. The only recognized adverse event was the development of a bronchopleural fistula in one patient.</p><p><strong>Conclusions: </strong>RATS appears feasible and safe in thoracic surgery. More investigation will be needed in order to determine its possible long-term benefits and cost effectiveness.</p>","PeriodicalId":75029,"journal":{"name":"The international journal of medical robotics + computer assisted surgery : MRCAS","volume":"10 4","pages":"404-9"},"PeriodicalIF":0.0,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1002/rcs.1589","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"32301792","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}