TUNeS: A Temporal U-Net With Self-Attention for Video-Based Surgical Phase Recognition

IEEE Transactions on Biomedical Engineering · IF 4.5 · CAS Tier 2 (Medicine) · JCR Q2 (Engineering, Biomedical) · Published: 2025-01-28 · DOI: 10.1109/TBME.2025.3535228
Isabel Funke, Dominik Rivoir, Stefanie Krell, Stefanie Speidel
Vol. 72, no. 7, pp. 2105-2119 · Citations: 0

Abstract

Objective: To enable context-aware computer assistance in the operating room of the future, cognitive systems need to understand automatically which surgical phase is being performed by the medical team. The primary source of information for surgical phase recognition is typically video, which presents two challenges: extracting meaningful features from the video stream and effectively modeling temporal information in the sequence of visual features. Methods: For temporal modeling, attention mechanisms have gained popularity due to their ability to capture long-range dependencies. In this paper, we explore design choices for attention in existing temporal models for surgical phase recognition and propose a novel approach that uses attention more effectively and does not require hand-crafted constraints: TUNeS, an efficient and simple temporal model that incorporates self-attention at the core of a convolutional U-Net structure. In addition, we propose to train the feature extractor, a standard CNN, together with an LSTM on preferably long video segments, i.e., with long temporal context. Results: In our experiments, almost all temporal models performed better on top of feature extractors that were trained with longer temporal context. On these contextualized features, TUNeS achieves state-of-the-art results on the Cholec80 dataset. Conclusion: This study offers new insights on how to use attention mechanisms to build accurate and efficient temporal models for surgical phase recognition. Significance: Implementing automatic surgical phase recognition is essential to automate the analysis and optimization of surgical workflows and to enable context-aware computer assistance during surgery, thus ultimately improving patient care.
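The core temporal-modeling operation the abstract refers to, self-attention over the sequence of per-frame visual features, can be illustrated with a minimal NumPy sketch. This is a generic single-head scaled dot-product attention layer, not the paper's exact TUNeS implementation; all shapes, names, and the random projections below are illustrative assumptions.

```python
# Minimal sketch: scaled dot-product self-attention over per-frame
# visual features, the kind of operation TUNeS places at the core of
# its convolutional U-Net. Single head, illustrative dimensions only.
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (T, d) frame features; Wq/Wk/Wv: (d, d_k) projections."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # (T, T) pairwise affinities
    A = softmax(scores, axis=-1)             # each frame attends to every frame
    return A @ V                             # contextualized features, (T, d_k)

rng = np.random.default_rng(0)
T, d, d_k = 6, 8, 4                          # 6 frames with 8-dim features
X = rng.standard_normal((T, d))
Wq, Wk, Wv = (rng.standard_normal((d, d_k)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)                             # one contextualized vector per frame
```

Because every frame attends to every other frame, this captures the long-range dependencies the abstract mentions, at O(T^2) cost, which is why embedding attention at the coarse levels of a U-Net structure keeps the model efficient.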
Source journal
IEEE Transactions on Biomedical Engineering (Engineering, Biomedical)
CiteScore: 9.40 · Self-citation rate: 4.30% · Articles per year: 880 · Review turnaround: 2.5 months
About the journal: IEEE Transactions on Biomedical Engineering contains basic and applied papers dealing with biomedical engineering. Papers range from engineering development in methods and techniques with biomedical applications to experimental and clinical investigations with engineering contributions.