{"title":"Deep Learning-Based Localization and Orientation Estimation of Pedicle Screws in Spinal Fusion Surgery.","authors":"Kwang Hyeon Kim, Hae-Won Koo, Byung-Jou Lee","doi":"10.13004/kjnt.2024.20.e17","DOIUrl":null,"url":null,"abstract":"<p><strong>Objective: </strong>This study investigated the application of a deep learning-based object detection model for accurate localization and orientation estimation of spinal fixation surgical instruments during surgery.</p><p><strong>Methods: </strong>We employed the You Only Look Once (YOLO) object detection framework with oriented bounding boxes (OBBs) to address the challenge of non-axis-aligned instruments in surgical scenes. The initial dataset of 100 images was created using brochure and website images from 11 manufacturers of commercially available pedicle screws used in spinal fusion surgeries, and data augmentation was used to expand 300 images. The model was trained, validated, and tested using 70%, 20%, and 10% of the images of lumbar pedicle screws, with the training process running for 100 epochs.</p><p><strong>Results: </strong>The model testing results showed that it could detect the locations of the pedicle screws in the surgical scene as well as their direction angles through the OBBs. The F1 score of the model was 0.86 (precision: 1.00, recall: 0.80) at each confidence level and mAP50. The high precision suggests that the model effectively identifies true positive instrument detections, although the recall indicates a slight limitation in capturing all instruments present. This approach offers advantages over traditional object detection in bounding boxes for tasks where object orientation is crucial, and our findings suggest the potential of YOLOv8 OBB models in real-world surgical applications such as instrument tracking and surgical navigation.</p><p><strong>Conclusion: </strong>Future work will explore incorporating additional data and the potential of hyperparameter optimization to improve overall model performance.</p>","PeriodicalId":36879,"journal":{"name":"Korean Journal of Neurotrauma","volume":"20 2","pages":"90-100"},"PeriodicalIF":0.0000,"publicationDate":"2024-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11249586/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Korean Journal of Neurotrauma","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.13004/kjnt.2024.20.e17","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/6/1 0:00:00","PubModel":"eCollection","JCR":"Q3","JCRName":"Medicine","Score":null,"Total":0}
Abstract
Objective: This study investigated the application of a deep learning-based object detection model for accurate localization and orientation estimation of spinal fixation surgical instruments during surgery.
Methods: We employed the You Only Look Once (YOLO) object detection framework with oriented bounding boxes (OBBs) to address the challenge of non-axis-aligned instruments in surgical scenes. The initial dataset of 100 images was assembled from brochure and website images from 11 manufacturers of commercially available pedicle screws used in spinal fusion surgery, and data augmentation expanded the dataset to 300 images. The model was trained, validated, and tested on 70%, 20%, and 10% of the lumbar pedicle screw images, respectively, with training running for 100 epochs.
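As a rough illustration of this pipeline (not the authors' exact code), the sketch below shows how a YOLOv8 OBB model could be fine-tuned with the Ultralytics API under the reported settings; the dataset config name, paths, and image size are assumptions for illustration only.

```python
# Minimal sketch of fine-tuning a YOLOv8 OBB model on an oriented
# pedicle-screw dataset, assuming the Ultralytics package is installed.
# The dataset YAML name, paths, and image size are hypothetical.
from ultralytics import YOLO

# Start from a pretrained oriented-bounding-box checkpoint.
model = YOLO("yolov8n-obb.pt")

# "pedicle_screws.yaml" (hypothetical) would declare the 70%/20%/10%
# train/val/test splits and the instrument classes; labels follow the
# YOLO OBB convention (class index plus four normalized corner points).
model.train(
    data="pedicle_screws.yaml",  # hypothetical dataset config
    epochs=100,                  # matches the training length in Methods
    imgsz=640,                   # assumed input resolution
)

# Evaluate on the held-out test split declared in the YAML.
metrics = model.val(split="test")
```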
Results: Testing showed that the model could detect the locations of the pedicle screws in the surgical scene, as well as their direction angles, through the OBBs. The model achieved an F1 score of 0.86 (precision: 1.00, recall: 0.80) at its optimal confidence threshold, evaluated together with mAP50. The high precision suggests that the model reliably produces true-positive instrument detections, although the recall indicates a slight limitation in capturing all instruments present. This approach offers advantages over traditional axis-aligned bounding-box detection for tasks where object orientation is crucial, and our findings suggest the potential of YOLOv8 OBB models in real-world surgical applications such as instrument tracking and surgical navigation.
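For context on how per-screw position and direction angle can be read from such predictions, here is a hedged sketch; the attribute names follow the current Ultralytics OBB results API, and the checkpoint and image paths are hypothetical.

```python
# Sketch of extracting screw locations and direction angles from OBB
# predictions; weight and image paths are hypothetical placeholders.
from ultralytics import YOLO

model = YOLO("runs/obb/train/weights/best.pt")  # hypothetical checkpoint
results = model("surgical_scene.jpg")           # hypothetical test image

for box in results[0].obb:
    # xywhr: center x, center y, width, height, rotation angle (radians)
    cx, cy, w, h, angle = box.xywhr[0].tolist()
    conf = float(box.conf[0])
    print(f"screw at ({cx:.0f}, {cy:.0f}), angle {angle:.2f} rad, conf {conf:.2f}")
```

The rotation term in each OBB is what distinguishes this output from a conventional detector: the same loop over axis-aligned boxes would yield only positions, not the screw direction angles the Results describe.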
Conclusion: Future work will explore incorporating additional training data and applying hyperparameter optimization to further improve overall model performance.