An analysis on the effect of body tissues and surgical tools on workflow recognition in first person surgical videos.

IF 2.3 · CAS Tier 3 (Medicine) · JCR Q3 (Engineering, Biomedical)
International Journal of Computer Assisted Radiology and Surgery
Pub Date: 2024-11-01 (Epub: 2024-02-27) · DOI: 10.1007/s11548-024-03074-6
Hisako Tomita, Naoto Ienaga, Hiroki Kajita, Tetsu Hayashida, Maki Sugimoto
Pages: 2195-2202 · Journal Article · Citations: 0
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11541397/pdf/

Abstract

Purpose: Analysis of operative fields is expected to aid in estimating procedural workflow and evaluating surgeons' procedural skills by considering the temporal transitions during the progression of the surgery. This study aims to propose an automatic recognition system for the procedural workflow by employing machine learning techniques to identify and distinguish elements in the operative field, including body tissues such as fat, muscle, and dermis, along with surgical tools.

Methods: We annotated approximately 908 first-person-view images of breast surgery for segmentation. The annotated images were used to train a pixel-level classifier based on Mask R-CNN. To assess the impact on procedural workflow recognition, we annotated an additional 43,007 images. A network based on the Transformer architecture was then trained on surgical images incorporating the masks of body tissues and surgical tools.

Results: The instance segmentation of each body tissue in the segmentation phase provided insights into the trend of area transitions for each tissue. Simultaneously, the spatial features of the surgical tools were effectively captured. Regarding the accuracy of procedural workflow recognition, accounting for body tissues yielded an average improvement of 3% over the baseline. Furthermore, including surgical tools increased accuracy by 4% over the baseline.
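The "trend of area transitions" mentioned above amounts to counting, per frame, how many pixels each tissue class occupies. A small sketch under the assumption that predictions are flattened into per-frame label maps (the paper's actual feature pipeline may differ):

```python
import numpy as np

def tissue_area_series(label_maps, num_classes=5):
    """For each frame's label map, count pixels per tissue class.

    Returns an array of shape (num_frames, num_classes), one area
    histogram per frame, so tissue exposure can be tracked over time.
    """
    series = [np.bincount(lm.ravel(), minlength=num_classes) for lm in label_maps]
    return np.stack(series)

# Two toy 4x4 "frames"; label 1 = fat, label 2 = muscle (illustrative only).
f0 = np.array([[1, 1, 0, 0],
               [1, 1, 0, 0],
               [2, 2, 0, 0],
               [0, 0, 0, 0]])
f1 = np.array([[1, 0, 0, 0],
               [0, 0, 0, 0],
               [2, 2, 2, 2],
               [2, 2, 0, 0]])
series = tissue_area_series([f0, f1])
print(series[:, 1], series[:, 2])  # fat area shrinks 4 -> 1, muscle grows 2 -> 6
```

A sequence model (here, the Transformer-based recognizer) can then consume such per-frame area vectors, alongside tool features, as temporal input.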

Conclusion: In this study, we revealed the contribution of the temporal transitions of body tissues and the spatial features of surgical tools to recognizing procedural workflow in first-person-view surgical videos. Body tissues, especially in open surgery, can be a crucial element. This study suggests that further improvements can be achieved by accurately identifying the surgical tools specific to each procedural workflow step.

Source journal
International Journal of Computer Assisted Radiology and Surgery
Categories: Engineering, Biomedical; Radiology, Nuclear Medicine & Medical Imaging
CiteScore: 5.90
Self-citation rate: 6.70%
Articles per year: 243
Review time: 6-12 weeks
Journal description: The International Journal for Computer Assisted Radiology and Surgery (IJCARS) is a peer-reviewed journal that provides a platform for closing the gap between medical and technical disciplines, and encourages interdisciplinary research and development activities in an international environment.