[Reinforcement learning-based method for type B aortic dissection localization].

An Zeng, Xianyang Lin, Jingliang Zhao, Dan Pan, Baoyao Yang, Xin Liu
{"title":"[Reinforcement learning-based method for type B aortic dissection localization].","authors":"An Zeng, Xianyang Lin, Jingliang Zhao, Dan Pan, Baoyao Yang, Xin Liu","doi":"10.7507/1001-5515.202309047","DOIUrl":null,"url":null,"abstract":"<p><p>In the segmentation of aortic dissection, there are issues such as low contrast between the aortic dissection and surrounding organs and vessels, significant differences in dissection morphology, and high background noise. To address these issues, this paper proposed a reinforcement learning-based method for type B aortic dissection localization. With the assistance of a two-stage segmentation model, the deep reinforcement learning was utilized to perform the first-stage aortic dissection localization task, ensuring the integrity of the localization target. In the second stage, the coarse segmentation results from the first stage were used as input to obtain refined segmentation results. To improve the recall rate of the first-stage segmentation results and include the segmentation target more completely in the localization results, this paper designed a reinforcement learning reward function based on the direction of recall changes. Additionally, the localization window was separated from the field of view window to reduce the occurrence of segmentation target loss. Unet, TransUnet, SwinUnet, and MT-Unet were selected as benchmark segmentation models. Through experiments, it was verified that the majority of the metrics in the two-stage segmentation process of this paper performed better than the benchmark results. Specifically, the Dice index improved by 1.34%, 0.89%, 27.66%, and 7.37% for each respective model. In conclusion, by incorporating the type B aortic dissection localization method proposed in this paper into the segmentation process, the overall segmentation accuracy is improved compared to the benchmark models. The improvement is particularly significant for models with poorer segmentation performance.</p>","PeriodicalId":39324,"journal":{"name":"生物医学工程学杂志","volume":"41 5","pages":"878-885"},"PeriodicalIF":0.0000,"publicationDate":"2024-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11527745/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"生物医学工程学杂志","FirstCategoryId":"1087","ListUrlMain":"https://doi.org/10.7507/1001-5515.202309047","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"Medicine","Score":null,"Total":0}

Abstract

Segmentation of aortic dissection is hampered by low contrast between the dissection and surrounding organs and vessels, large variation in dissection morphology, and high background noise. To address these issues, this paper proposed a reinforcement learning-based method for localizing type B aortic dissection. Within a two-stage segmentation model, deep reinforcement learning performed the first-stage localization task, ensuring that the localization target remained intact; the second stage then took the coarse segmentation results from the first stage as input and produced refined segmentation results. To raise the recall of the first-stage results so that the segmentation target was captured more completely in the localization results, a reinforcement learning reward function based on the direction of recall change was designed. In addition, the localization window was decoupled from the field-of-view window to reduce loss of the segmentation target. Unet, TransUnet, SwinUnet, and MT-Unet were selected as benchmark segmentation models. Experiments verified that most metrics of the proposed two-stage segmentation process surpassed the benchmark results; in particular, the Dice index improved by 1.34%, 0.89%, 27.66%, and 7.37% for the four models, respectively. In conclusion, incorporating the proposed type B aortic dissection localization method into the segmentation process improves overall segmentation accuracy over the benchmark models, with particularly large gains for models whose baseline segmentation performance is poorer.
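The abstract does not specify the agent's action space or the exact reward magnitudes, so the following is a minimal illustrative sketch of a reward driven by the direction of recall change: the agent moves or resizes a localization window, and the sign of the change in recall (the fraction of ground-truth dissection pixels the window captures) determines the reward. The function names and the +1/-1/0 reward values are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def recall_in_window(mask: np.ndarray, window: tuple) -> float:
    """Fraction of ground-truth dissection pixels captured by the window.

    mask   : binary ground-truth segmentation, shape (H, W)
    window : (y0, x0, y1, x1) localization window in pixel coordinates
    """
    total = mask.sum()
    if total == 0:
        return 0.0
    y0, x0, y1, x1 = window
    inside = mask[y0:y1, x0:x1].sum()
    return float(inside) / float(total)

def recall_direction_reward(mask: np.ndarray,
                            prev_window: tuple,
                            new_window: tuple,
                            eps: float = 1e-6) -> float:
    """Reward based on the *direction* of the recall change: positive when
    the new window captures more of the target, negative when part of the
    target is lost, zero otherwise (hypothetical values)."""
    r_prev = recall_in_window(mask, prev_window)
    r_new = recall_in_window(mask, new_window)
    if r_new > r_prev + eps:
        return 1.0    # recall rose: window covers the target more completely
    elif r_new < r_prev - eps:
        return -1.0   # recall fell: part of the segmentation target was lost
    return 0.0        # no meaningful change
```

A reward of this shape pushes the agent toward windows that enclose the whole dissection, which matches the paper's stated goal of keeping the localization target intact before the second-stage refinement.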
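The reported improvements are measured with the Dice index. For reference, a standard Dice computation for binary masks (the usual 2|A∩B| / (|A| + |B|) definition, not code from the paper) might look like this:

```python
import numpy as np

def dice_index(pred: np.ndarray, target: np.ndarray, eps: float = 1e-6) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```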
