Daniele Amparore, Michele Sica, Paolo Verri, Federico Piramide, Enrico Checcucci, Sabrina De Cillis, Alberto Piana, Davide Campobasso, Mariano Burgio, Edoardo Cisero, Giovanni Busacca, Michele Di Dio, Pietro Piazzolla, Cristian Fiori, Francesco Porpiglia
{"title":"在增强现实引导下进行机器人肾部分切除术时自动重叠三维虚拟图像的计算机视觉和机器学习技术。","authors":"Daniele Amparore, Michele Sica, Paolo Verri, Federico Piramide, Enrico Checcucci, Sabrina De Cillis, Alberto Piana, Davide Campobasso, Mariano Burgio, Edoardo Cisero, Giovanni Busacca, Michele Di Dio, Pietro Piazzolla, Cristian Fiori, Francesco Porpiglia","doi":"10.1177/15330338241229368","DOIUrl":null,"url":null,"abstract":"<p><strong>Objectives: </strong>The research's purpose is to develop a software that automatically integrates and overlay 3D virtual models of kidneys harboring renal masses into the Da Vinci robotic console, assisting surgeon during the intervention.</p><p><strong>Introduction: </strong>Precision medicine, especially in the field of minimally-invasive partial nephrectomy, aims to use 3D virtual models as a guidance for augmented reality robotic procedures. However, the co-registration process of the virtual images over the real operative field is performed manually.</p><p><strong>Methods: </strong>In this prospective study, two strategies for the automatic overlapping of the model over the real kidney were explored: the computer vision technology, leveraging the super-enhancement of the kidney allowed by the intraoperative injection of Indocyanine green for superimposition and the convolutional neural network technology, based on the processing of live images from the endoscope, after a training of the software on frames from prerecorded videos of the same surgery. The work-team, comprising a bioengineer, a software-developer and a surgeon, collaborated to create hyper-accuracy 3D models for automatic 3D-AR-guided RAPN. For each patient, demographic and clinical data were collected.</p><p><strong>Results: </strong>Two groups (group A for the first technology with 12 patients and group B for the second technology with 8 patients) were defined. They showed comparable preoperative and post-operative characteristics. Concerning the first technology the average co-registration time was 7 (3-11) seconds while in the case of the second technology 11 (6-13) seconds. No major intraoperative or postoperative complications were recorded. There were no differences in terms of functional outcomes between the groups at every time-point considered.</p><p><strong>Conclusion: </strong>The first technology allowed a successful anchoring of the 3D model to the kidney, despite minimal manual refinements. The second technology improved kidney automatic detection without relying on indocyanine injection, resulting in better organ boundaries identification during tests. 
Further studies are needed to confirm this preliminary evidence.</p>","PeriodicalId":22203,"journal":{"name":"Technology in Cancer Research & Treatment","volume":"23 ","pages":"15330338241229368"},"PeriodicalIF":2.7000,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10878218/pdf/","citationCount":"0","resultStr":"{\"title\":\"Computer Vision and Machine-Learning Techniques for Automatic 3D Virtual Images Overlapping During Augmented Reality Guided Robotic Partial Nephrectomy.\",\"authors\":\"Daniele Amparore, Michele Sica, Paolo Verri, Federico Piramide, Enrico Checcucci, Sabrina De Cillis, Alberto Piana, Davide Campobasso, Mariano Burgio, Edoardo Cisero, Giovanni Busacca, Michele Di Dio, Pietro Piazzolla, Cristian Fiori, Francesco Porpiglia\",\"doi\":\"10.1177/15330338241229368\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Objectives: </strong>The research's purpose is to develop a software that automatically integrates and overlay 3D virtual models of kidneys harboring renal masses into the Da Vinci robotic console, assisting surgeon during the intervention.</p><p><strong>Introduction: </strong>Precision medicine, especially in the field of minimally-invasive partial nephrectomy, aims to use 3D virtual models as a guidance for augmented reality robotic procedures. However, the co-registration process of the virtual images over the real operative field is performed manually.</p><p><strong>Methods: </strong>In this prospective study, two strategies for the automatic overlapping of the model over the real kidney were explored: the computer vision technology, leveraging the super-enhancement of the kidney allowed by the intraoperative injection of Indocyanine green for superimposition and the convolutional neural network technology, based on the processing of live images from the endoscope, after a training of the software on frames from prerecorded videos of the same surgery. The work-team, comprising a bioengineer, a software-developer and a surgeon, collaborated to create hyper-accuracy 3D models for automatic 3D-AR-guided RAPN. For each patient, demographic and clinical data were collected.</p><p><strong>Results: </strong>Two groups (group A for the first technology with 12 patients and group B for the second technology with 8 patients) were defined. They showed comparable preoperative and post-operative characteristics. Concerning the first technology the average co-registration time was 7 (3-11) seconds while in the case of the second technology 11 (6-13) seconds. No major intraoperative or postoperative complications were recorded. There were no differences in terms of functional outcomes between the groups at every time-point considered.</p><p><strong>Conclusion: </strong>The first technology allowed a successful anchoring of the 3D model to the kidney, despite minimal manual refinements. The second technology improved kidney automatic detection without relying on indocyanine injection, resulting in better organ boundaries identification during tests. 
Further studies are needed to confirm this preliminary evidence.</p>\",\"PeriodicalId\":22203,\"journal\":{\"name\":\"Technology in Cancer Research & Treatment\",\"volume\":\"23 \",\"pages\":\"15330338241229368\"},\"PeriodicalIF\":2.7000,\"publicationDate\":\"2024-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10878218/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Technology in Cancer Research & Treatment\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1177/15330338241229368\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"ONCOLOGY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Technology in Cancer Research & Treatment","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1177/15330338241229368","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ONCOLOGY","Score":null,"Total":0}
Computer Vision and Machine-Learning Techniques for Automatic 3D Virtual Images Overlapping During Augmented Reality Guided Robotic Partial Nephrectomy.
Objectives: The purpose of this research was to develop software that automatically integrates and overlays 3D virtual models of kidneys harboring renal masses into the Da Vinci robotic console, assisting the surgeon during the intervention.
Introduction: Precision medicine, especially in the field of minimally invasive partial nephrectomy, aims to use 3D virtual models as guidance for augmented reality robotic procedures. However, the co-registration of the virtual images over the real operative field is currently performed manually.
Methods: In this prospective study, two strategies for the automatic overlapping of the model onto the real kidney were explored: a computer vision technology, which leverages the super-enhancement of the kidney produced by the intraoperative injection of indocyanine green to drive the superimposition, and a convolutional neural network technology, which processes live images from the endoscope after the software has been trained on frames from prerecorded videos of the same surgery. The work team, comprising a bioengineer, a software developer, and a surgeon, collaborated to create hyper-accurate 3D models for automatic 3D augmented reality (AR)-guided robot-assisted partial nephrectomy (RAPN).  For each patient, demographic and clinical data were collected.
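As a rough illustration of the first strategy only, the sketch below (Python with OpenCV; not the authors' software) thresholds a hypothetical ICG-enhanced endoscopic frame to segment the fluorescent kidney region and derives a centroid and bounding box that an AR layer could, in principle, use to anchor the 3D model. The HSV bounds, file name, and anchoring scheme are assumptions for illustration, not values reported in the study.

```python
# Minimal sketch, assuming a green fluorescence signal from intraoperative ICG.
# Thresholds and the anchoring scheme are hypothetical and would need tuning
# for a real near-infrared endoscopic feed.

import cv2
import numpy as np


def detect_icg_region(frame_bgr, hsv_lower=(35, 60, 60), hsv_upper=(85, 255, 255)):
    """Return a binary mask of the fluorescence-enhanced kidney region."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lower), np.array(hsv_upper))
    # Remove small speckles so only the dominant fluorescent area remains.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask


def anchor_from_mask(mask):
    """Compute bounding box and centroid of the largest detected region,
    which an AR overlay could use to position and scale the 3D kidney model."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    moments = cv2.moments(largest)
    if moments["m00"] == 0:
        return None
    cx = int(moments["m10"] / moments["m00"])
    cy = int(moments["m01"] / moments["m00"])
    return {"bbox": cv2.boundingRect(largest), "centroid": (cx, cy)}


if __name__ == "__main__":
    # 'endoscope_frame.png' is a placeholder file name for a captured frame.
    frame = cv2.imread("endoscope_frame.png")
    if frame is not None:
        print(anchor_from_mask(detect_icg_region(frame)))
```

The second strategy would replace the colour thresholding step with a segmentation network trained on labelled frames from prerecorded videos of the same surgery, removing the dependence on indocyanine green enhancement.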
Results: Two groups were defined: group A (first technology, 12 patients) and group B (second technology, 8 patients). The groups showed comparable preoperative and postoperative characteristics. The average co-registration time was 7 (range 3-11) seconds with the first technology and 11 (range 6-13) seconds with the second. No major intraoperative or postoperative complications were recorded. There were no differences in functional outcomes between the groups at any of the time points considered.
Conclusion: The first technology allowed successful anchoring of the 3D model to the kidney, albeit with minimal manual refinements. The second technology improved automatic kidney detection without relying on indocyanine green injection, resulting in better identification of organ boundaries during testing. Further studies are needed to confirm this preliminary evidence.
Journal introduction:
Technology in Cancer Research & Treatment (TCRT) is a JCR-ranked, broad-spectrum, open access, peer-reviewed publication whose aim is to provide researchers and clinicians with a platform to share and discuss developments in the prevention, diagnosis, treatment, and monitoring of cancer.