Arjina Maharjan, Abeer Alsadoon, P W C Prasad, Nada AlSallami, Tarik A Rashid, Ahmad Alrubaie, Sami Haddad
{"title":"在肠道和口腔颌面外科远程呈现中使用混合现实技术的新方案:三维平均值克隆算法。","authors":"Arjina Maharjan, Abeer Alsadoon, P W C Prasad, Nada AlSallami, Tarik A Rashid, Ahmad Alrubaie, Sami Haddad","doi":"10.1002/rcs.2161","DOIUrl":null,"url":null,"abstract":"<p><strong>Background and aim: </strong>Most of the Mixed Reality models used in the surgical telepresence are suffering from the discrepancies in the boundary area and spatial-temporal inconsistency due to the illumination variation in the video frames. The aim behind this work is to propose a new solution that helps produce the composite video by merging the augmented video of the surgery site and virtual hand of the remote expertise surgeon. The purpose of the proposed solution is to decrease the processing time and enhance the accuracy of merged video by decreasing the overlay and visualization error and removing occlusion and artefacts.</p><p><strong>Methodology: </strong>The proposed system enhanced the mean value cloning algorithm that helps to maintain the spatial-temporal consistency of the final composite video. The enhanced algorithm includes the 3D mean value coordinates and improvised mean value interpolant in the image cloning process, which helps to reduce the sawtooth, smudging and discoloration artefacts around the blending region RESULTS: As compared to the state of art solution, the accuracy in terms of overlay error of the proposed solution is improved from 1.01mm to 0.80mm whereas the accuracy in terms of visualization error is improved from 98.8% to 99.4%. The processing time is reduced to 0.173 seconds from 0.211 seconds CONCLUSION: Our solution helps make the object of interest consistent with the light intensity of the target image by adding the space distance that helps maintain the spatial consistency in the final merged video. This article is protected by copyright. All rights reserved.</p>","PeriodicalId":75029,"journal":{"name":"The international journal of medical robotics + computer assisted surgery : MRCAS","volume":" ","pages":"e2161"},"PeriodicalIF":0.0000,"publicationDate":"2020-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Novel Solution of Using Mixed Reality in Bowel and Oral and Maxillofacial Surgical Telepresence: 3D Mean Value Cloning algorithm.\",\"authors\":\"Arjina Maharjan, Abeer Alsadoon, P W C Prasad, Nada AlSallami, Tarik A Rashid, Ahmad Alrubaie, Sami Haddad\",\"doi\":\"10.1002/rcs.2161\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background and aim: </strong>Most of the Mixed Reality models used in the surgical telepresence are suffering from the discrepancies in the boundary area and spatial-temporal inconsistency due to the illumination variation in the video frames. The aim behind this work is to propose a new solution that helps produce the composite video by merging the augmented video of the surgery site and virtual hand of the remote expertise surgeon. The purpose of the proposed solution is to decrease the processing time and enhance the accuracy of merged video by decreasing the overlay and visualization error and removing occlusion and artefacts.</p><p><strong>Methodology: </strong>The proposed system enhanced the mean value cloning algorithm that helps to maintain the spatial-temporal consistency of the final composite video. 
The enhanced algorithm includes the 3D mean value coordinates and improvised mean value interpolant in the image cloning process, which helps to reduce the sawtooth, smudging and discoloration artefacts around the blending region RESULTS: As compared to the state of art solution, the accuracy in terms of overlay error of the proposed solution is improved from 1.01mm to 0.80mm whereas the accuracy in terms of visualization error is improved from 98.8% to 99.4%. The processing time is reduced to 0.173 seconds from 0.211 seconds CONCLUSION: Our solution helps make the object of interest consistent with the light intensity of the target image by adding the space distance that helps maintain the spatial consistency in the final merged video. This article is protected by copyright. All rights reserved.</p>\",\"PeriodicalId\":75029,\"journal\":{\"name\":\"The international journal of medical robotics + computer assisted surgery : MRCAS\",\"volume\":\" \",\"pages\":\"e2161\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-09-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"The international journal of medical robotics + computer assisted surgery : MRCAS\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1002/rcs.2161\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"The international journal of medical robotics + computer assisted surgery : MRCAS","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1002/rcs.2161","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A Novel Solution of Using Mixed Reality in Bowel and Oral and Maxillofacial Surgical Telepresence: 3D Mean Value Cloning Algorithm.
Background and aim: Most of the Mixed Reality models used in surgical telepresence suffer from discrepancies in the boundary area and from spatial-temporal inconsistency caused by illumination variation across video frames. The aim of this work is to propose a new solution that produces a composite video by merging the augmented video of the surgery site with the virtual hand of the remote expert surgeon. The proposed solution is intended to decrease processing time and improve the accuracy of the merged video by reducing overlay and visualization errors and removing occlusion and artefacts.
Methodology: The proposed system enhances the mean value cloning algorithm to maintain the spatial-temporal consistency of the final composite video. The enhanced algorithm incorporates 3D mean value coordinates and an improvised mean value interpolant into the image cloning process, which reduces the sawtooth, smudging and discoloration artefacts around the blending region.
Results: Compared with the state-of-the-art solution, the accuracy of the proposed solution in terms of overlay error improves from 1.01 mm to 0.80 mm, while accuracy in terms of visualization error improves from 98.8% to 99.4%. Processing time is reduced from 0.211 seconds to 0.173 seconds.
Conclusion: Our solution makes the object of interest consistent with the light intensity of the target image by adding a space-distance term that helps maintain spatial consistency in the final merged video.
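The abstract does not include code, but the core idea behind mean value cloning, interpolating the colour differences observed along the cloning boundary into the interior of the pasted region via mean value coordinates, can be sketched. The following Python snippet is a minimal illustrative sketch of plain 2D mean value cloning, not the authors' enhanced 3D variant; the function names (mean_value_coordinates, mvc_clone), the NumPy-based approach, and the assumption that the boundary polygon is already aligned between source and target are all illustrative assumptions.

```python
# Minimal sketch of 2D mean-value cloning (NOT the paper's 3D algorithm).
# Assumes `source` and `target` are aligned uint8 images of equal size,
# `boundary` is an (m, 2) array of polygon vertices (x, y) around the cloned
# region, and `interior` is an (n, 2) array of pixel coordinates inside it.
import numpy as np

def mean_value_coordinates(x, boundary):
    """Mean value coordinates of point `x` w.r.t. the polygon `boundary`."""
    d = boundary - x                           # vectors from x to each vertex
    r = np.linalg.norm(d, axis=1)              # distances ||p_i - x||
    d_next = np.roll(d, -1, axis=0)            # next vertex around the polygon
    r_next = np.roll(r, -1)
    cos_a = np.clip(np.einsum('ij,ij->i', d, d_next) / (r * r_next), -1.0, 1.0)
    alpha = np.arccos(cos_a)                   # angle subtended at x by edge i
    tan_half = np.tan(alpha / 2.0)
    # w_i = (tan(alpha_{i-1}/2) + tan(alpha_i/2)) / ||p_i - x||
    w = (np.roll(tan_half, 1) + tan_half) / r
    return w / w.sum()                         # normalised coordinates lambda_i

def mvc_clone(source, target, boundary, interior):
    """Blend `source` into `target` by diffusing boundary colour differences."""
    # Colour mismatch along the boundary (images are indexed [row=y, col=x]).
    diff = (target[boundary[:, 1], boundary[:, 0]].astype(float)
            - source[boundary[:, 1], boundary[:, 0]].astype(float))
    out = target.astype(float).copy()
    for px in interior:
        lam = mean_value_coordinates(px.astype(float), boundary.astype(float))
        correction = lam @ diff                # smooth "membrane" value at px
        out[px[1], px[0]] = source[px[1], px[0]] + correction
    return np.clip(out, 0, 255).astype(np.uint8)
```

In this formulation, each interior pixel receives a smooth correction that is a weighted average of the boundary mismatches, which is what suppresses visible seams around the blending region. The paper's contribution, as described in the abstract, extends this idea with 3D mean value coordinates and a spatial-distance term so that the blend also stays consistent with the target illumination across video frames.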