Arthroscopic scene segmentation using multispectral reconstructed frames and deep learning

IF 4.4 | Q1 (Computer Science, Interdisciplinary Applications) | Intelligent Medicine | Pub Date: 2023-11-01 | DOI: 10.1016/j.imed.2022.10.006
Shahnewaz Ali, Ross Crawford, Ajay K. Pandey

Abstract

Background

Knee arthroscopy is one of the most complex minimally invasive surgeries, routinely performed to treat a range of ailments and injuries of the knee joint. Its complex ergonomic design imposes visualization and navigation constraints, leading to unintended tissue damage and a steep learning curve before surgeons gain proficiency. The lack of robust visual texture and landmark features further limits the success of image-guided approaches to knee arthroscopy. Feature- and texture-less tissue structures of the knee anatomy, lighting conditions, noise, blur, debris, a lack of accurate ground-truth labels, tissue degeneration, and injury make semantic segmentation an extremely challenging task. To address this complex research problem, this study reports the utility of reconstructed surface reflectance as a viable source of information that can be combined with cutting-edge deep learning techniques to achieve highly accurate scene segmentation.

Methods

We proposed an intraoperative, two-tier deep learning method that makes full use of the tissue reflectance information present within an RGB frame to segment texture-less knee arthroscopy video frames into multiple tissue types. This study included several cadaver-knee experiments at the Medical and Engineering Research Facility, located on the Prince Charles Hospital campus, Brisbane, Queensland. Data were collected from a total of five cadaver knees; three were from male donors and one from a female donor. The age range of the donors was 56–93 years. Aging-related tissue degeneration and some anterior cruciate ligament injuries were observed in most cadaver knees. An arthroscopic image dataset was created and subsequently labeled by clinical experts. This study also included validation of a prototype stereo arthroscope, alongside a conventional arthroscope, to attain a larger field of view and stereo vision. We reconstructed surface reflectance from camera responses that exhibited distinct spatial features at different wavelengths, ranging from 380 to 730 nm in the RGB spectrum. To segment texture-less tissue types, these data were used within a two-stage deep learning model.
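The abstract does not specify the exact reconstruction algorithm, but a common baseline for recovering multispectral reflectance from RGB camera responses is a least-squares (pseudoinverse) mapping learned from paired training data. The sketch below is illustrative only: the camera sensitivities, band count (36 bands spanning 380–730 nm at 10 nm steps), and training data are simulated assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: reflectance spectra sampled at 36 bands
# (380-730 nm, 10 nm steps) paired with simulated RGB camera responses.
n_train, n_bands = 200, 36
spectra = rng.random((n_train, n_bands))      # known reflectances
sensitivities = rng.random((n_bands, 3))      # assumed camera spectral sensitivities
rgb = spectra @ sensitivities                 # simulated camera responses, (200, 3)

# Learn a least-squares mapping M such that rgb @ M approximates the spectra.
M, *_ = np.linalg.lstsq(rgb, spectra, rcond=None)   # M has shape (3, 36)

# Estimate the reflectance spectrum for a new RGB observation.
new_rgb = spectra[:1] @ sensitivities          # one new pixel's RGB response
reconstructed = new_rgb @ M                    # estimated 36-band reflectance
print(reconstructed.shape)  # (1, 36)
```

In practice such mappings are trained against measured reflectance targets (e.g. from a spectrometer or color chart) rather than random data; the random arrays here only demonstrate the shapes and the fitting step.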

Results

The accuracy of the network was measured using the Dice coefficient score. The average segmentation accuracy was 0.6625 for the anterior cruciate ligament (ACL), 0.84 for bone, and 0.565 for the meniscus. For the analysis, we excluded extremely poor-quality frames; a frame is considered extremely poor quality when more than 50% of any tissue structure is over- or underexposed due to nonuniform light exposure. Additionally, when only high-quality frames were considered during the training and validation stages, the average bone segmentation accuracy improved to 0.92 and the average ACL segmentation accuracy reached 0.73. These two tissue types, the femur and the ACL, are highly important for tissue tracking in arthroscopy. Comparatively, previous work based on RGB data achieved much lower average accuracies for the femur, tibia, ACL, and meniscus of 0.78, 0.50, 0.41, and 0.43 using U-Net, and 0.79, 0.50, 0.51, and 0.48 using U-Net++. From this analysis, it is clear that our multispectral method outperforms the previously proposed methods and delivers a much better solution for automatic arthroscopic scene segmentation.
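For reference, the Dice coefficient reported above is defined for two binary masks A and B as 2|A∩B| / (|A| + |B|). A minimal NumPy sketch (illustrative only, not the authors' evaluation code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement by convention
    return 2.0 * intersection / total

# Toy example: two 4x4 masks of 8 pixels each, overlapping in 4 pixels.
a = np.zeros((4, 4), dtype=bool); a[:2, :] = True   # rows 0-1
b = np.zeros((4, 4), dtype=bool); b[1:3, :] = True  # rows 1-2
print(dice_coefficient(a, b))  # 2*4 / (8+8) = 0.5
```

A Dice score of 1.0 means perfect overlap with the ground-truth label, and 0.0 means no overlap, so the 0.92 bone score corresponds to near-complete agreement with the expert annotation.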

Conclusion

The method is based on a deep learning model and requires reconstructed surface reflectance. It can provide tissue awareness intraoperatively, with a high potential to improve surgical precision. It could also be applied to other minimally invasive surgeries as an online segmentation tool for training, aiding, and guiding surgeons, as well as for image-guided surgery.
