Deep learning-based surgical phase recognition in laparoscopic cholecystectomy.

Annals of Hepato-Biliary-Pancreatic Surgery (IF 1.1, Q4, Gastroenterology & Hepatology). Pub Date: 2024-07-29. DOI: 10.14701/ahbps.24-091
Hye Yeon Yang, Seung Soo Hong, Jihun Yoon, Bokyung Park, Youngno Yoon, Dai Hoon Han, Gi Hong Choi, Min-Kook Choi, Sung Hyun Kim

Abstract

Backgrounds/aims: In the minimally invasive surgery era, artificial intelligence (AI) technology has been used with video recordings to assess surgical quality, support education, and evaluate surgical performance. Automating surgical workflow analysis from surgical videos has attracted considerable attention as a route to effective assessment and evaluation. This study aimed to design a deep learning model that automatically identifies surgical phases in laparoscopic cholecystectomy videos, and to assess the accuracy of that phase recognition.

Methods: One hundred and twenty cholecystectomy videos from a public dataset (Cholec80) and 40 laparoscopic cholecystectomy videos recorded between July 2022 and December 2022 at a single institution were collected. These datasets were split into training and testing sets for the AI model at a 2:1 ratio. Test scenarios were constructed according to the structural characteristics of the trained model. No pre- or post-processing of the input data or inference output was performed, in order to isolate the effect of the labels on model training.
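The 2:1 training/testing split described above can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the video identifiers, seed, and shuffling strategy are assumptions, and the abstract does not specify how cases were assigned to each set.

```python
import random

def split_videos(video_ids, train_ratio=2/3, seed=42):
    """Split a list of video IDs into training and testing sets at a 2:1 ratio."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    ids = list(video_ids)
    rng.shuffle(ids)
    cut = round(len(ids) * train_ratio)
    return ids[:cut], ids[cut:]

# 120 public + 40 institutional videos; the IDs here are illustrative placeholders.
videos = [f"video_{i:03d}" for i in range(160)]
train, test = split_videos(videos)
print(len(train), len(test))
```

Splitting by video (rather than by frame) avoids leaking near-identical frames from the same operation into both sets.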

Results: A total of 98,234 frames were extracted from 40 cases as test data. The overall accuracy of the model was 91.2%. The most accurate phase was Calot's triangle dissection (F1 score: 0.9421), whereas the least accurate phase was clipping and cutting (F1 score: 0.7761).
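The metrics reported above (frame-level overall accuracy and per-phase F1 score) can be computed from per-frame phase predictions roughly as follows. This is a minimal sketch of the standard definitions; the phase names and toy labels are illustrative, not the study's data.

```python
def phase_metrics(y_true, y_pred):
    """Frame-level overall accuracy and per-phase F1 for phase label sequences."""
    assert len(y_true) == len(y_pred) and y_true
    # Overall accuracy: fraction of frames whose predicted phase matches the label.
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    f1 = {}
    for phase in set(y_true) | set(y_pred):
        tp = sum(t == p == phase for t, p in zip(y_true, y_pred))
        fp = sum(p == phase and t != phase for t, p in zip(y_true, y_pred))
        fn = sum(t == phase and p != phase for t, p in zip(y_true, y_pred))
        # F1 = 2*TP / (2*TP + FP + FN); defined as 0 when the phase never occurs.
        denom = 2 * tp + fp + fn
        f1[phase] = 2 * tp / denom if denom else 0.0
    return accuracy, f1

# Toy example with hypothetical phase labels
true = ["prep", "dissection", "dissection", "clipping", "clipping"]
pred = ["prep", "dissection", "clipping", "clipping", "clipping"]
acc, f1 = phase_metrics(true, pred)
```

Per-phase F1 balances precision and recall, which is why a phase can score low (e.g., clipping and cutting at 0.7761) even when overall accuracy is high: short phases contribute few frames to the accuracy figure but are penalized heavily by misclassification at their boundaries.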

Conclusions: Our AI model identified the phases of laparoscopic cholecystectomy with high accuracy.
