MOST: MR reconstruction Optimization for multiple downStream Tasks via continual learning

Hwihun Jeong, Se Young Chun, Jongho Lee
{"title":"MOST:通过持续学习优化多个下行流任务的磁共振重构","authors":"Hwihun Jeong, Se Young Chun, Jongho Lee","doi":"arxiv-2409.10394","DOIUrl":null,"url":null,"abstract":"Deep learning-based Magnetic Resonance (MR) reconstruction methods have\nfocused on generating high-quality images but they often overlook the impact on\ndownstream tasks (e.g., segmentation) that utilize the reconstructed images.\nCascading separately trained reconstruction network and downstream task network\nhas been shown to introduce performance degradation due to error propagation\nand domain gaps between training datasets. To mitigate this issue, downstream\ntask-oriented reconstruction optimization has been proposed for a single\ndownstream task. Expanding this optimization to multi-task scenarios is not\nstraightforward. In this work, we extended this optimization to sequentially\nintroduced multiple downstream tasks and demonstrated that a single MR\nreconstruction network can be optimized for multiple downstream tasks by\ndeploying continual learning (MOST). MOST integrated techniques from\nreplay-based continual learning and image-guided loss to overcome catastrophic\nforgetting. Comparative experiments demonstrated that MOST outperformed a\nreconstruction network without finetuning, a reconstruction network with\nna\\\"ive finetuning, and conventional continual learning methods. This\nadvancement empowers the application of a single MR reconstruction network for\nmultiple downstream tasks. The source code is available at:\nhttps://github.com/SNU-LIST/MOST","PeriodicalId":501289,"journal":{"name":"arXiv - EE - Image and Video Processing","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"MOST: MR reconstruction Optimization for multiple downStream Tasks via continual learning\",\"authors\":\"Hwihun Jeong, Se Young Chun, Jongho Lee\",\"doi\":\"arxiv-2409.10394\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep learning-based Magnetic Resonance (MR) reconstruction methods have\\nfocused on generating high-quality images but they often overlook the impact on\\ndownstream tasks (e.g., segmentation) that utilize the reconstructed images.\\nCascading separately trained reconstruction network and downstream task network\\nhas been shown to introduce performance degradation due to error propagation\\nand domain gaps between training datasets. To mitigate this issue, downstream\\ntask-oriented reconstruction optimization has been proposed for a single\\ndownstream task. Expanding this optimization to multi-task scenarios is not\\nstraightforward. In this work, we extended this optimization to sequentially\\nintroduced multiple downstream tasks and demonstrated that a single MR\\nreconstruction network can be optimized for multiple downstream tasks by\\ndeploying continual learning (MOST). MOST integrated techniques from\\nreplay-based continual learning and image-guided loss to overcome catastrophic\\nforgetting. Comparative experiments demonstrated that MOST outperformed a\\nreconstruction network without finetuning, a reconstruction network with\\nna\\\\\\\"ive finetuning, and conventional continual learning methods. This\\nadvancement empowers the application of a single MR reconstruction network for\\nmultiple downstream tasks. 
The source code is available at:\\nhttps://github.com/SNU-LIST/MOST\",\"PeriodicalId\":501289,\"journal\":{\"name\":\"arXiv - EE - Image and Video Processing\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - EE - Image and Video Processing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.10394\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - EE - Image and Video Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.10394","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Deep learning-based Magnetic Resonance (MR) reconstruction methods have focused on generating high-quality images but often overlook the impact on downstream tasks (e.g., segmentation) that utilize the reconstructed images. Cascading separately trained reconstruction and downstream task networks has been shown to introduce performance degradation due to error propagation and domain gaps between training datasets. To mitigate this issue, downstream task-oriented reconstruction optimization has been proposed for a single downstream task, but expanding this optimization to multi-task scenarios is not straightforward. In this work, we extended the optimization to sequentially introduced multiple downstream tasks and demonstrated that a single MR reconstruction network can be optimized for multiple downstream tasks by deploying continual learning (MOST). MOST integrates techniques from replay-based continual learning with an image-guided loss to overcome catastrophic forgetting. Comparative experiments demonstrated that MOST outperformed a reconstruction network without finetuning, a reconstruction network with naïve finetuning, and conventional continual learning methods. This advancement enables the application of a single MR reconstruction network to multiple downstream tasks. The source code is available at: https://github.com/SNU-LIST/MOST
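For a concrete picture of the setup described in the abstract, below is a minimal, hypothetical PyTorch sketch of the idea, not the authors' implementation (see the repository linked above for that). It assumes that "replay-based continual learning" means keeping a small buffer of samples from earlier tasks and revisiting them while fine-tuning on a new task, and that the "image-guided loss" anchors the reconstruction to a fully sampled reference image; all module and variable names here are illustrative.

```python
# Hypothetical sketch: fine-tune one reconstruction network for sequentially
# introduced downstream tasks, with a replay buffer and an image-guided loss.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F


class ReconNet(nn.Module):
    """Stand-in reconstruction network (placeholder for the actual model)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)


def finetune_sequential_tasks(recon, tasks, replay_size=32, lambda_img=1.0, steps=100):
    """Fine-tune `recon` on tasks introduced one after another.

    `tasks` is a list of (task_net, task_loss, data_iter) triples; each data item is
    (undersampled_input, fully_sampled_reference, label). Downstream networks stay frozen.
    """
    opt = torch.optim.Adam(recon.parameters(), lr=1e-4)
    replay = []  # small buffer of (input, reference) pairs from earlier tasks

    for task_net, task_loss, data_iter in tasks:
        task_net.eval()
        for _, (x_us, x_ref, y) in zip(range(steps), data_iter):
            recon_img = recon(x_us)

            # Downstream task-oriented loss: judge the reconstruction through
            # the frozen downstream network.
            loss = task_loss(task_net(recon_img), y)

            # Image-guided loss: keep the reconstruction anchored to the
            # fully sampled reference image.
            loss = loss + lambda_img * F.l1_loss(recon_img, x_ref)

            # Replay: revisit a stored sample from an earlier task (here with the
            # image loss only, a simplification of replay-based continual learning).
            if replay:
                rx_us, rx_ref = random.choice(replay)
                loss = loss + lambda_img * F.l1_loss(recon(rx_us), rx_ref)

            opt.zero_grad()
            loss.backward()
            opt.step()

            if len(replay) < replay_size:
                replay.append((x_us.detach(), x_ref.detach()))

    return recon
```

A fuller replay scheme would also revisit the earlier tasks' downstream losses, not just the image-guided term; the sketch omits this for brevity.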