Cross-Species Segmentation of Animal Prostate Using a Human Prostate Dataset and Limited Preoperative Animal Images: A Sampled Experiment on Dog Prostate Tissue

IF 3.0 | CAS Q4 (Computer Science) | JCR Q2 (Engineering, Electrical & Electronic)
International Journal of Imaging Systems and Technology | Pub Date: 2024-07-10 | DOI: 10.1002/ima.23138
Yang Yang, Seong Young Ko
Citations: 0

Abstract

In the development of medical devices and surgical robot systems, animal models are often used for evaluation, necessitating accurate organ segmentation. Deep learning-based image segmentation provides a solution for automatic and precise organ segmentation. However, a significant challenge in this approach arises from the limited availability of training data for animal models. In contrast, human medical image datasets are readily available. To address this imbalance, this study proposes a fine-tuning approach that combines a limited set of animal model images with a comprehensive human image dataset. Various postprocessing algorithms were applied to ensure that the segmentation results met the positioning requirements for the evaluation of a medical robot under development. As one of the target applications, magnetic resonance images were used to determine the position of the dog's prostate, which was then used to determine the target location of the robot under development. The MSD TASK5 dataset was used as the human dataset for pretraining, which involved a modified U-Net network. Ninety-nine pretrained backbone networks were tested as encoders for U-Net. The cross-training validation was performed using the selected network backbone. The highest accuracy, with an IoU score of 0.949, was achieved using the independent validation set from the MSD TASK5 human dataset. Subsequently, fine-tuning was performed using a small set of dog prostate images, resulting in the highest accuracy of an IoU score of 0.961 across different cross-validation groups. The processed results demonstrate the feasibility of the proposed approach for accurate prostate segmentation.
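The paper reports segmentation accuracy as IoU (intersection over union, also called the Jaccard index) scores of 0.949 and 0.961. A minimal NumPy sketch of how a binary-mask IoU score is computed (the function name `iou_score` and the toy masks are ours, not from the paper):

```python
import numpy as np

def iou_score(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over union (Jaccard index) for two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    if union == 0:
        # Both masks empty: conventionally treated as perfect agreement.
        return 1.0
    return float(intersection) / float(union)

# Toy example: two 4x4 masks, each with 4 foreground pixels, 2 in common.
pred = np.zeros((4, 4), dtype=int)
pred[0, 0:4] = 1                      # foreground: row 0, columns 0-3
target = np.zeros((4, 4), dtype=int)
target[0, 2:4] = 1                    # foreground: row 0, columns 2-3
target[1, 0:2] = 1                    # foreground: row 1, columns 0-1
print(iou_score(pred, target))        # intersection 2, union 6 -> 0.333...
```

An IoU of 0.961, as reported for the fine-tuned model, means the predicted and ground-truth prostate masks overlap in about 96% of their combined area.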

Source Journal
International Journal of Imaging Systems and Technology (Engineering Technology / Imaging Science & Photographic Technology)
CiteScore: 6.90
Self-citation rate: 6.10%
Articles per year: 138
Review time: 3 months
Journal Introduction: The International Journal of Imaging Systems and Technology (IMA) is a forum for the exchange of ideas and results relevant to imaging systems, including imaging physics and informatics. The journal covers all imaging modalities in humans and animals. IMA accepts technically sound and scientifically rigorous research in the interdisciplinary field of imaging, including relevant algorithmic research and hardware and software development, and their applications relevant to medical research. The journal provides a platform to publish original research in structural and functional imaging. The journal is also open to imaging studies of the human body and on animals that describe novel diagnostic imaging and analysis methods. Technical, theoretical, and clinical research in both normal and clinical populations is encouraged. Submissions describing methods, software, databases, replication studies as well as negative results are also considered. The scope of the journal includes, but is not limited to, the following in the context of biomedical research: imaging and neuro-imaging modalities (structural MRI, functional MRI, PET, SPECT, CT, ultrasound, EEG, MEG, NIRS, etc.); neuromodulation and brain stimulation techniques such as TMS and tDCS; software and hardware for imaging, especially related to human and animal health; image segmentation in normal and clinical populations; pattern analysis and classification using machine learning techniques; computational modeling and analysis; brain connectivity and connectomics; systems-level characterization of brain function; neural networks and neurorobotics; computer vision, based on human/animal physiology; brain-computer interface (BCI) technology; big data, databasing and data mining.
Latest Articles in This Journal
Unveiling Cancer: A Data-Driven Approach for Early Identification and Prediction Using F-RUS-RF Model
Predicting the Early Detection of Breast Cancer Using Hybrid Machine Learning Systems and Thermographic Imaging
CATNet: A Cross Attention and Texture-Aware Network for Polyp Segmentation
VMC-UNet: A Vision Mamba-CNN U-Net for Tumor Segmentation in Breast Ultrasound Image
Suppression of the Tissue Component With the Total Least-Squares Algorithm to Improve Second Harmonic Imaging of Ultrasound Contrast Agents