Deep Learning-Based Localization and Orientation Estimation of Pedicle Screws in Spinal Fusion Surgery.

Korean Journal of Neurotrauma (Q3, Medicine) · Pub Date: 2024-06-17 · eCollection Date: 2024-06-01 · DOI: 10.13004/kjnt.2024.20.e17
Kwang Hyeon Kim, Hae-Won Koo, Byung-Jou Lee
Korean Journal of Neurotrauma, vol. 20, no. 2, pp. 90-100. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11249586/pdf/

Abstract

Objective: This study investigated the application of a deep learning-based object detection model for accurate localization and orientation estimation of spinal fixation surgical instruments during surgery.

Methods: We employed the You Only Look Once (YOLO) object detection framework with oriented bounding boxes (OBBs) to address the challenge of non-axis-aligned instruments in surgical scenes. The initial dataset of 100 images was created from brochure and website images from 11 manufacturers of commercially available pedicle screws used in spinal fusion surgery, and data augmentation was used to expand the dataset to 300 images. The model was trained, validated, and tested on 70%, 20%, and 10% of the lumbar pedicle screw images, respectively, with training running for 100 epochs.
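The 70/20/10 train/validation/test split described above can be sketched as follows; the file names and random seed are illustrative assumptions, not details from the paper.

```python
import random

def split_dataset(items, seed=0):
    """Shuffle and partition items 70/20/10 into train/val/test subsets."""
    items = list(items)
    random.Random(seed).shuffle(items)  # deterministic shuffle for reproducibility
    n = len(items)
    n_train = n * 7 // 10  # 70% for training
    n_val = n * 2 // 10    # 20% for validation
    # the remaining ~10% is held out for testing
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

# 300 augmented images, as in the study (names are placeholders)
images = [f"screw_{i:03d}.jpg" for i in range(300)]
train_set, val_set, test_set = split_dataset(images)
print(len(train_set), len(val_set), len(test_set))  # 210 60 30
```

Integer arithmetic is used for the split sizes so the partition is exact for any dataset size, avoiding floating-point rounding surprises.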

Results: Testing showed that the model could detect the locations of the pedicle screws in the surgical scene, as well as their orientation angles, through the OBBs. The model's F1 score was 0.86 (precision: 1.00, recall: 0.80) across confidence levels, along with mAP50. The high precision suggests that the model reliably identifies true-positive instrument detections, although the recall indicates a slight limitation in capturing every instrument present. This approach offers advantages over traditional axis-aligned bounding-box detection for tasks where object orientation is crucial, and our findings suggest the potential of YOLOv8 OBB models in real-world surgical applications such as instrument tracking and surgical navigation.
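For reference, the F1 score is the harmonic mean of precision and recall; a minimal sketch of the standard formula is below. Note that the exact harmonic mean of the reported precision (1.00) and recall (0.80) is about 0.89, so the paper's 0.86 presumably reflects the unrounded precision/recall values at a specific confidence threshold.

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall: 2PR / (P + R)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(1.00, 0.80), 2))  # 0.89
```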

Conclusion: Future work will explore incorporating additional data and applying hyperparameter optimization to improve overall model performance.
