Tumor Segmentation in Intraoperative Fluorescence Images Based on Transfer Learning and Convolutional Neural Networks

IF 1.2 · JCR Q3 (Surgery) · Medicine, Tier 4 · Surgical Innovation · Pub Date: 2024-04-15 · DOI: 10.1177/15533506241246576
Weijia Hou, Liwen Zou, Dong Wang
Citations: 0

Abstract

Objective: To propose a transfer-learning-based method for tumor segmentation in intraoperative fluorescence images that assists surgeons in efficiently and accurately identifying the boundaries of tumors of interest.

Methods: We employed transfer learning and deep convolutional neural networks (DCNNs) for tumor segmentation. Specifically, we first pre-trained four networks on the ImageNet dataset to extract low-level features. We then fine-tuned these networks separately on two fluorescence image datasets (ABFM and DTHP) to improve segmentation performance on fluorescence images. Finally, we tested the trained models on the DTHL dataset, comparing this approach against DCNNs trained end-to-end and the traditional level-set method.

Results: The transfer-learning-based UNet++ model achieved high segmentation accuracies of 82.17% on the ABFM dataset, 95.61% on the DTHP dataset, and 85.49% on the DTHL test set. On the DTHP dataset, the pre-trained DeepLab v3+ network performed exceptionally well, with a segmentation accuracy of 96.48%, and all models exceeded 90% accuracy there.

Conclusion: To the best of our knowledge, this study is the first to explore tumor segmentation in intraoperative fluorescence images. The results show that, compared with traditional methods, deep learning offers significant advantages in segmentation performance, and that transfer learning enables deep models to outperform end-to-end training on small-sample fluorescence image data. This finding provides strong support for surgeons seeking more reliable and accurate image segmentation results during surgery.
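The abstract reports results as "segmentation accuracy" without specifying the exact metric. As a rough illustration only (the metric definitions here are an assumption, not taken from the paper), pixel-wise accuracy and the Dice coefficient for a binary tumor mask can be computed as:

```python
import numpy as np

def pixel_accuracy(pred: np.ndarray, gt: np.ndarray) -> float:
    """Fraction of pixels where the binary prediction matches the ground truth."""
    return float((pred == gt).mean())

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    intersection = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return float(2.0 * intersection / total) if total > 0 else 1.0

# Toy 4x4 masks standing in for a predicted and a reference tumor region.
pred = np.array([[0, 0, 1, 1],
                 [0, 1, 1, 1],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0]])
gt   = np.array([[0, 0, 1, 1],
                 [0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0]])

print(pixel_accuracy(pred, gt))    # masks disagree on 1 of 16 pixels
print(dice_coefficient(pred, gt))  # overlap-based score, stricter on small tumors
```

Pixel accuracy can look high even for a poor mask when the tumor occupies few pixels, which is why overlap measures such as Dice are commonly preferred in segmentation evaluation.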
Source journal
Surgical Innovation (Medicine, Surgery)
CiteScore: 2.90
Self-citation rate: 0.00%
Articles per year: 72
Review time: 6-12 weeks
Journal introduction: Surgical Innovation (SRI) is a peer-reviewed bi-monthly journal focusing on minimally invasive surgical techniques, new instruments such as laparoscopes and endoscopes, and new technologies. SRI prepares surgeons to think and work in "the operating room of the future" through learning new techniques, understanding and adapting to new technologies, maintaining surgical competencies, and applying surgical outcomes data to their practices. This journal is a member of the Committee on Publication Ethics (COPE).
Latest articles in this journal
- The Use of the Symani Surgical System® in Emergency Hand Trauma Care.
- A Prospective Study on a Suture Force Feedback Device for Training and Evaluating Junior Surgeons in Anastomotic Surgical Closure.
- The Reconstructive Metaverse - Collaboration in Real-Time Shared Mixed Reality Environments for Microsurgical Reconstruction.
- Patients Engaged in Losing Weight Preoperatively Experience Improved Outcomes After Hiatal Hernia Repair.
- Metrics for Success in a Surgical Innovation Fellowship.