MFF-Net: Multiscale feature fusion semantic segmentation network for intracranial surgical instruments

Authors: Zhenzhong Liu, Laiwang Zheng, Shubin Yang, Zichen Zhong, Guobin Zhang
DOI: 10.1002/rcs.2595
Journal: International Journal of Medical Robotics and Computer Assisted Surgery, 20(1)
Published: 2023-11-06 (Journal Article)

Background
In robot-assisted surgery, automatic segmentation of surgical instrument images is crucial for surgical safety. The proposed method addresses challenges in the craniotomy environment, such as occlusion and illumination, through an efficient surgical instrument segmentation network.
Methods
The network uses YOLOv8 as the object detection framework and integrates a semantic segmentation head to provide both detection and segmentation capabilities. A concatenation of multi-channel feature maps fuses deep and shallow features to enhance model generalisation. The GBC2f module keeps the network lightweight while enabling it to capture global information.
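The abstract does not specify the exact fusion design, but the general idea of concatenating deep and shallow feature maps can be sketched as follows; this is a minimal numpy illustration, not the paper's implementation, and the function names and toy shapes are assumptions.

```python
import numpy as np

def upsample_nearest(x, factor):
    """Nearest-neighbour upsampling of a (C, H, W) feature map."""
    return x.repeat(factor, axis=1).repeat(factor, axis=2)

def fuse_features(shallow, deep):
    """Fuse a shallow (high-resolution) and a deep (low-resolution)
    feature map: upsample the deep map to the shallow map's spatial
    size, then concatenate along the channel axis."""
    factor = shallow.shape[1] // deep.shape[1]
    deep_up = upsample_nearest(deep, factor)
    return np.concatenate([shallow, deep_up], axis=0)

# toy maps: shallow 16 channels at 64x64, deep 64 channels at 16x16
shallow = np.random.rand(16, 64, 64)
deep = np.random.rand(64, 16, 16)
fused = fuse_features(shallow, deep)
print(fused.shape)  # (80, 64, 64)
```

The fused map keeps the shallow branch's spatial resolution while stacking in the deep branch's semantically richer channels, which is the usual rationale for deep/shallow concatenation in segmentation networks.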
Results
Experimental validation on the intracranial glioma surgical instrument dataset shows excellent performance: a 94.9% mean pixel accuracy (MPA), an 89.9% mean intersection-over-union (MIoU), and 126.6 FPS.
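For readers unfamiliar with the reported metrics, MPA and MIoU can be computed from predicted and ground-truth label maps as sketched below; this is a standard-definition illustration with a hypothetical toy example, not the paper's evaluation code.

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union: per-class IoU averaged over
    classes that appear in the prediction or the ground truth."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

def mean_pixel_accuracy(pred, target, num_classes):
    """Mean pixel accuracy: per-class recall averaged over the
    classes present in the ground truth."""
    accs = []
    for c in range(num_classes):
        mask = target == c
        if mask.sum() > 0:
            accs.append((pred[mask] == c).sum() / mask.sum())
    return float(np.mean(accs))

# toy 2x2 label maps: 0 = background, 1 = instrument
pred   = np.array([[0, 0], [1, 1]])
target = np.array([[0, 1], [1, 1]])
print(mean_pixel_accuracy(pred, target, 2))  # 5/6
print(mean_iou(pred, target, 2))             # 7/12
```

MIoU penalises false positives as well as misses (union in the denominator), so it is typically lower than MPA, consistent with the 89.9% versus 94.9% figures reported above.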
Conclusions
The experimental results show that the segmentation model proposed in this study has significant advantages over other state-of-the-art models, providing a valuable reference for the further development of intelligent surgical robots.
Journal Introduction
The International Journal of Medical Robotics and Computer Assisted Surgery provides a cross-disciplinary platform for presenting the latest developments in robotics and computer assisted technologies for medical applications. The journal publishes cutting-edge papers and expert reviews, complemented by commentaries, correspondence and conference highlights that stimulate discussion and exchange of ideas. Areas of interest include robotic surgery aids and systems, operative planning tools, medical imaging and visualisation, simulation and navigation, virtual reality, intuitive command and control systems, haptics and sensor technologies. In addition to research and surgical planning studies, the journal welcomes papers detailing clinical trials and applications of computer-assisted workflows and robotic systems in neurosurgery, urology, paediatric, orthopaedic, craniofacial, cardiovascular, thoraco-abdominal, musculoskeletal and visceral surgery. Articles providing critical analysis of clinical trials, assessment of the benefits and risks of the application of these technologies, commenting on ease of use, or addressing surgical education and training issues are also encouraged. The journal aims to foster a community that encompasses medical practitioners, researchers, and engineers and computer scientists developing robotic systems and computational tools in academic and commercial environments, with the intention of promoting and developing these exciting areas of medical technology.