Prior knowledge-guided vision-transformer-based unsupervised domain adaptation for intubation prediction in lung disease at one week

Impact Factor: 5.4 | CAS Medicine Tier 2 | JCR Q1 (Engineering, Biomedical) | Computerized Medical Imaging and Graphics | Pub Date: 2024-10-15 | DOI: 10.1016/j.compmedimag.2024.102442
Junlin Yang , John Anderson Garcia Henao , Nicha Dvornek , Jianchun He , Danielle V. Bower , Arno Depotter , Herkus Bajercius , Aurélie Pahud de Mortanges , Chenyu You , Christopher Gange , Roberta Eufrasia Ledda , Mario Silva , Charles S. Dela Cruz , Wolf Hautz , Harald M. Bonel , Mauricio Reyes , Lawrence H. Staib , Alexander Poellinger , James S. Duncan
Citations: 0

Abstract

Data-driven approaches have achieved great success in various medical image analysis tasks. However, fully-supervised data-driven approaches require unprecedentedly large amounts of labeled data and often suffer from poor generalization to unseen new data due to domain shifts. Various unsupervised domain adaptation (UDA) methods have been actively explored to solve these problems. Anatomical and spatial priors in medical imaging are common and have been incorporated into data-driven approaches to ease the need for labeled data as well as to achieve better generalization and interpretation. Inspired by the effectiveness of recent transformer-based methods in medical image analysis, the adaptability of transformer-based models has been investigated. How to incorporate prior knowledge for transformer-based UDA models remains under-explored. In this paper, we introduce a prior knowledge-guided and transformer-based unsupervised domain adaptation (PUDA) pipeline. It regularizes the vision transformer attention heads using anatomical and spatial prior information that is shared by both the source and target domain, which provides additional insight into the similarity between the underlying data distribution across domains. Besides the global alignment of class tokens, it assigns local weights to guide the token distribution alignment via adversarial training. We evaluate our proposed method on a clinical outcome prediction task, where Computed Tomography (CT) and Chest X-ray (CXR) data are collected and used to predict the intubation status of patients in a week. Abnormal lesions are regarded as anatomical and spatial prior information for this task and are annotated in the source domain scans. Extensive experiments show the effectiveness of the proposed PUDA method.
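The core idea of regularizing vision-transformer attention heads with an anatomical prior can be sketched as a simple loss: average the class token's attention over heads, normalize it into a distribution over patch tokens, and penalize its divergence from a distribution derived from the annotated lesion mask. The sketch below is an illustrative NumPy toy, not the authors' implementation; the specific loss form (mean-squared divergence) and all names are assumptions.

```python
import numpy as np

def attention_prior_loss(attn, prior_mask):
    """Penalize disagreement between ViT attention and an anatomical prior.

    attn:       (heads, tokens) attention weights of the class token over
                patch tokens; each row sums to 1.
    prior_mask: (tokens,) binary mask marking patch tokens that overlap
                annotated abnormal lesions (the anatomical/spatial prior).
    """
    # Average over attention heads, then renormalize to a distribution.
    mean_attn = attn.mean(axis=0)
    mean_attn = mean_attn / mean_attn.sum()

    # Turn the binary lesion mask into a target distribution.
    target = prior_mask / prior_mask.sum()

    # Mean-squared divergence between attention and prior distributions.
    return float(np.mean((mean_attn - target) ** 2))

# Toy example: 4 heads attending over 8 patch tokens,
# with lesions annotated in patches 2 and 3.
rng = np.random.default_rng(0)
attn = rng.random((4, 8))
attn = attn / attn.sum(axis=1, keepdims=True)  # rows sum to 1
prior = np.zeros(8)
prior[2:4] = 1.0

loss = attention_prior_loss(attn, prior)
```

In a full pipeline such a term would be added to the adversarial alignment and task losses, pulling the source and target attention maps toward the shared anatomical prior.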
Source Journal
CiteScore: 10.70
Self-citation rate: 3.50%
Articles per year: 71
Review time: 26 days
Journal description: The purpose of the journal Computerized Medical Imaging and Graphics is to act as a source for the exchange of research results concerning algorithmic advances, development, and application of digital imaging in disease detection, diagnosis, intervention, prevention, precision medicine, and population health. Included in the journal will be articles on novel computerized imaging or visualization techniques, including artificial intelligence and machine learning, augmented reality for surgical planning and guidance, big biomedical data visualization, computer-aided diagnosis, computerized-robotic surgery, image-guided therapy, imaging scanning and reconstruction, mobile and tele-imaging, radiomics, and imaging integration and modeling with other information relevant to digital health. The types of biomedical imaging include: magnetic resonance, computed tomography, ultrasound, nuclear medicine, X-ray, microwave, optical and multi-photon microscopy, video and sensory imaging, and the convergence of biomedical images with other non-imaging datasets.
Latest articles from this journal:
DSIFNet: Implicit feature network for nasal cavity and vestibule segmentation from 3D head CT
AFSegNet: few-shot 3D ankle-foot bone segmentation via hierarchical feature distillation and multi-scale attention and fusion
VLFATRollout: Fully transformer-based classifier for retinal OCT volumes
WISE: Efficient WSI selection for active learning in histopathology
RPDNet: A reconstruction-regularized parallel decoders network for rectal tumor and rectum co-segmentation