{"title":"利用深度强化学习在腹腔镜手术中自主反牵引以确保视野安全","authors":"Yuriko Iyama, Yudai Takahashi, Jiahe Chen, Takumi Noda, Kazuaki Hara, Etsuko Kobayashi, Ichiro Sakuma, Naoki Tomii","doi":"10.1007/s11548-024-03264-2","DOIUrl":null,"url":null,"abstract":"<h3 data-test=\"abstract-sub-heading\">Purpose</h3><p>Countertraction is a vital technique in laparoscopic surgery, stretching the tissue surface for incision and dissection. Due to the technical challenges and frequency of countertraction, autonomous countertraction has the potential to significantly reduce surgeons’ workload. Despite several methods proposed for automation, achieving optimal tissue visibility and tension for incision remains unrealized. Therefore, we propose a method for autonomous countertraction that enhances tissue surface planarity and visibility.</p><h3 data-test=\"abstract-sub-heading\">Methods</h3><p>We constructed a neural network that integrates a point cloud convolutional neural network (CNN) with a deep reinforcement learning (RL) model. This network continuously controls the forceps position based on the surface shape observed by a camera and the forceps position. RL is conducted in a physical simulation environment, with verification experiments performed in both simulation and phantom environments. The evaluation was performed based on plane error, representing the average distance between the tissue surface and its least-squares plane, and angle error, indicating the angle between the tissue surface vector and the camera’s optical axis vector.</p><h3 data-test=\"abstract-sub-heading\">Results</h3><p>The plane error decreased under all conditions both simulation and phantom environments, with 93.3% of case showing a reduction in angle error. In simulations, the plane error decreased from <span>\\(3.6 \\pm 1.5{\\text{ mm}}\\)</span> to <span>\\(1.1 \\pm 1.8 {\\text{mm}}\\)</span>, and the angle error from <span>\\(29 \\pm 19 ^\\circ\\)</span> to <span>\\(14 \\pm 13 ^\\circ\\)</span>. In the phantom environment, the plane error decreased from <span>\\(0.96 \\pm 0.24{\\text{ mm}}\\)</span> to <span>\\(0.39 \\pm 0.23 {\\text{mm}}\\)</span>, and the angle error from <span>\\(32 \\pm 29 ^\\circ\\)</span> to <span>\\(17 \\pm 20 ^\\circ\\)</span>.</p><h3 data-test=\"abstract-sub-heading\">Conclusion</h3><p>The proposed neural network was validated in both simulation and phantom experimental settings, confirming that traction control improved tissue planarity and visibility. These results demonstrate the feasibility of automating countertraction using the proposed model.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":"19 1","pages":""},"PeriodicalIF":2.3000,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Autonomous countertraction for secure field of view in laparoscopic surgery using deep reinforcement learning\",\"authors\":\"Yuriko Iyama, Yudai Takahashi, Jiahe Chen, Takumi Noda, Kazuaki Hara, Etsuko Kobayashi, Ichiro Sakuma, Naoki Tomii\",\"doi\":\"10.1007/s11548-024-03264-2\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<h3 data-test=\\\"abstract-sub-heading\\\">Purpose</h3><p>Countertraction is a vital technique in laparoscopic surgery, stretching the tissue surface for incision and dissection. 
Due to the technical challenges and frequency of countertraction, autonomous countertraction has the potential to significantly reduce surgeons’ workload. Despite several methods proposed for automation, achieving optimal tissue visibility and tension for incision remains unrealized. Therefore, we propose a method for autonomous countertraction that enhances tissue surface planarity and visibility.</p><h3 data-test=\\\"abstract-sub-heading\\\">Methods</h3><p>We constructed a neural network that integrates a point cloud convolutional neural network (CNN) with a deep reinforcement learning (RL) model. This network continuously controls the forceps position based on the surface shape observed by a camera and the forceps position. RL is conducted in a physical simulation environment, with verification experiments performed in both simulation and phantom environments. The evaluation was performed based on plane error, representing the average distance between the tissue surface and its least-squares plane, and angle error, indicating the angle between the tissue surface vector and the camera’s optical axis vector.</p><h3 data-test=\\\"abstract-sub-heading\\\">Results</h3><p>The plane error decreased under all conditions both simulation and phantom environments, with 93.3% of case showing a reduction in angle error. In simulations, the plane error decreased from <span>\\\\(3.6 \\\\pm 1.5{\\\\text{ mm}}\\\\)</span> to <span>\\\\(1.1 \\\\pm 1.8 {\\\\text{mm}}\\\\)</span>, and the angle error from <span>\\\\(29 \\\\pm 19 ^\\\\circ\\\\)</span> to <span>\\\\(14 \\\\pm 13 ^\\\\circ\\\\)</span>. In the phantom environment, the plane error decreased from <span>\\\\(0.96 \\\\pm 0.24{\\\\text{ mm}}\\\\)</span> to <span>\\\\(0.39 \\\\pm 0.23 {\\\\text{mm}}\\\\)</span>, and the angle error from <span>\\\\(32 \\\\pm 29 ^\\\\circ\\\\)</span> to <span>\\\\(17 \\\\pm 20 ^\\\\circ\\\\)</span>.</p><h3 data-test=\\\"abstract-sub-heading\\\">Conclusion</h3><p>The proposed neural network was validated in both simulation and phantom experimental settings, confirming that traction control improved tissue planarity and visibility. These results demonstrate the feasibility of automating countertraction using the proposed model.</p>\",\"PeriodicalId\":51251,\"journal\":{\"name\":\"International Journal of Computer Assisted Radiology and Surgery\",\"volume\":\"19 1\",\"pages\":\"\"},\"PeriodicalIF\":2.3000,\"publicationDate\":\"2024-09-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Computer Assisted Radiology and Surgery\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.1007/s11548-024-03264-2\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"ENGINEERING, BIOMEDICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Computer Assisted Radiology and Surgery","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1007/s11548-024-03264-2","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Autonomous countertraction for secure field of view in laparoscopic surgery using deep reinforcement learning
Purpose
Countertraction is a vital technique in laparoscopic surgery, stretching the tissue surface for incision and dissection. Because countertraction is technically demanding and performed frequently, automating it has the potential to significantly reduce surgeons’ workload. Although several methods have been proposed for automation, achieving optimal tissue visibility and tension for incision remains an open problem. We therefore propose a method for autonomous countertraction that enhances tissue surface planarity and visibility.
Methods
We constructed a neural network that integrates a point cloud convolutional neural network (CNN) with a deep reinforcement learning (RL) model. The network continuously controls the forceps position based on the tissue surface shape observed by a camera and the current forceps position. RL was conducted in a physics-based simulation environment, and verification experiments were performed in both simulation and phantom environments. The evaluation used two metrics: plane error, the average distance between the tissue surface and its least-squares plane, and angle error, the angle between the tissue surface vector and the camera’s optical axis vector.
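The two metrics can be computed directly from the observed surface point cloud. The following is a minimal sketch, not the authors’ code: it assumes the “tissue surface vector” is the normal of the least-squares plane fitted to the point cloud, that the camera’s optical axis is the z-axis of the camera frame, and that the function names are illustrative.

```python
# Hypothetical metric computation for plane error and angle error,
# assuming points are tissue-surface samples expressed in the camera frame.
import numpy as np


def _lsq_plane_normal(points: np.ndarray) -> np.ndarray:
    """Unit normal of the least-squares plane of an (N, 3) point cloud."""
    centered = points - points.mean(axis=0)
    # The plane normal is the right singular vector with the smallest
    # singular value of the centered point cloud.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]


def plane_error(points: np.ndarray) -> float:
    """Average distance (same unit as points, e.g. mm) between the surface
    points and their least-squares plane."""
    centered = points - points.mean(axis=0)
    normal = _lsq_plane_normal(points)
    return float(np.abs(centered @ normal).mean())


def angle_error(points: np.ndarray,
                optical_axis: np.ndarray = np.array([0.0, 0.0, 1.0])) -> float:
    """Angle (degrees) between the surface normal and the camera optical axis."""
    normal = _lsq_plane_normal(points)
    # Take the absolute cosine to resolve the sign ambiguity of the normal.
    cos_angle = np.abs(normal @ optical_axis) / np.linalg.norm(optical_axis)
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))
```

Under these assumptions, a perfectly flat patch facing the camera would give a plane error of 0 mm and an angle error of 0°, which is consistent with lower values indicating better planarity and visibility.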
Results
The plane error decreased under all conditions in both the simulation and phantom environments, with 93.3% of cases showing a reduction in angle error. In simulation, the plane error decreased from \(3.6 \pm 1.5\text{ mm}\) to \(1.1 \pm 1.8\text{ mm}\), and the angle error from \(29 \pm 19^\circ\) to \(14 \pm 13^\circ\). In the phantom environment, the plane error decreased from \(0.96 \pm 0.24\text{ mm}\) to \(0.39 \pm 0.23\text{ mm}\), and the angle error from \(32 \pm 29^\circ\) to \(17 \pm 20^\circ\).
Conclusion
The proposed neural network was validated in both simulation and phantom experimental settings, confirming that traction control improved tissue planarity and visibility. These results demonstrate the feasibility of automating countertraction using the proposed model.
Journal introduction:
The International Journal for Computer Assisted Radiology and Surgery (IJCARS) is a peer-reviewed journal that provides a platform for closing the gap between medical and technical disciplines, and encourages interdisciplinary research and development activities in an international environment.