Automatic generation of operation notes in endoscopic pituitary surgery videos using workflow recognition

Adrito Das , Danyal Z. Khan , John G. Hanrahan , Hani J. Marcus , Danail Stoyanov
DOI: 10.1016/j.ibmed.2023.100107
Journal: Intelligence-based medicine, Volume 8, Article 100107
Published: 2023-01-01 (Journal Article)
Source: https://www.sciencedirect.com/science/article/pii/S2666521223000212
Cited by: 0

Abstract


Operation notes are a crucial component of patient care. However, writing them manually is prone to human error, particularly in high-pressure clinical environments. Automatic generation of operation notes from video recordings can alleviate some of the administrative burden, improve accuracy, and provide additional information. To achieve this for endoscopic pituitary surgery, 27 steps were identified via expert consensus. Then, for the 97 videos recorded for this study, a timestamp for each step was annotated by an expert surgeon. To automatically determine whether a step is present in a video, a three-stage architecture was created. First, for each step, a convolutional neural network performed binary image classification on each frame of a video. Second, for each step, the binary frame classifications were passed to a discriminator for binary video classification. Third, for each video, the binary video classifications were passed to an accumulator for multi-label step classification. The architecture was trained on 77 videos and tested on 20 videos, achieving a 0.80 weighted-F1 score. The classifications were input into a clinically based predefined template and further enriched with additional video analytics. This work therefore demonstrates that automatic generation of operative notes from surgical videos is feasible and can assist surgeons during documentation.
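The three-stage architecture described in the abstract can be sketched as a simple pipeline. This is a minimal illustration only: a random stub stands in for the paper's per-frame CNN, and the thresholds, function names, and `min_fraction` parameter are assumptions for demonstration, not values from the study.

```python
import numpy as np

N_STEPS = 27  # surgical steps identified via expert consensus

def frame_classifier(frames: np.ndarray, step: int) -> np.ndarray:
    """Stage 1 stand-in: per-frame probabilities that `step` is visible.
    In the paper this is a convolutional neural network; here a seeded
    random stub is used so the pipeline shape can be demonstrated."""
    rng = np.random.default_rng(step)
    return rng.random(len(frames))

def video_discriminator(frame_probs: np.ndarray, threshold: float = 0.5,
                        min_fraction: float = 0.1) -> bool:
    """Stage 2: binary video classification -- declare the step present
    if a sufficient fraction of frames are classified positive.
    (Both cut-offs are illustrative assumptions.)"""
    positive_fraction = (frame_probs > threshold).mean()
    return bool(positive_fraction >= min_fraction)

def step_accumulator(frames: np.ndarray) -> np.ndarray:
    """Stage 3: accumulate the 27 per-step video decisions into one
    multi-label vector for the whole video."""
    return np.array(
        [video_discriminator(frame_classifier(frames, s))
         for s in range(N_STEPS)],
        dtype=bool,
    )

# Placeholder video: 300 frames of 224x224 RGB.
video = np.zeros((300, 224, 224, 3))
labels = step_accumulator(video)
print(labels.shape)  # (27,) -- one present/absent flag per step
```

The resulting boolean vector is what would be slotted into the predefined clinical template, one entry per surgical step.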

Source journal
Intelligence-based medicine (Health Informatics)
CiteScore: 5.00
Self-citation rate: 0.00%
Articles published: 0
Review time: 187 days