Machine learning in medical imaging. MLMI (Workshop): Latest Publications

Understanding Clinical Progression of Late-Life Depression to Alzheimer's Disease Over 5 Years with Structural MRI.
Pub Date : 2022-09-01 DOI: 10.1007/978-3-031-21014-3_27
Lintao Zhang, Minhui Yu, Lihong Wang, David C Steffens, Rong Wu, Guy G Potter, Mingxia Liu

Previous studies have shown that late-life depression (LLD) may be a precursor of neurodegenerative diseases and may increase the risk of dementia. At present, the pathological relationship between LLD and dementia, in particular Alzheimer's disease (AD), is unclear. Structural MRI (sMRI) can provide objective biomarkers for the computer-aided diagnosis of LLD and AD, offering a promising way to understand the clinical progression of brain disorders. However, few studies have focused on sMRI-based predictive analysis of clinical progression from LLD to AD. In this paper, we develop a deep learning method to predict the clinical progression of LLD to AD up to 5 years after baseline using T1-weighted structural MRIs. We also analyze several important factors that limit the diagnostic performance of learning-based methods, including data imbalance, small sample size, and multi-site data heterogeneity, by leveraging a relatively large-scale database to aid model training. Experimental results on 308 subjects with sMRIs acquired from 2 imaging sites and the publicly available ADNI database demonstrate the potential of deep learning in predicting the clinical progression of LLD to AD. To the best of our knowledge, this is among the first attempts to explore the complex pathophysiological relationship between LLD and AD based on structural MRI using a deep learning method.
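
A minimal sketch (not the authors' published code) of the kind of pipeline the abstract describes: a small 3D CNN classifier over T1-weighted volumes with an inverse-frequency class-weighted loss to counter the progressor/non-progressor imbalance the paper discusses. Layer sizes, the class counts, and the weighting scheme are illustrative assumptions.

```python
import torch
import torch.nn as nn

class Small3DCNN(nn.Module):
    """Toy 3D CNN for binary progression prediction from an sMRI volume."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),          # global pooling -> (B, 32, 1, 1, 1)
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                     # x: (B, 1, D, H, W)
        return self.classifier(self.features(x).flatten(1))

model = Small3DCNN()
# Hypothetical counts; inverse-frequency weights mitigate class imbalance.
class_counts = torch.tensor([260.0, 48.0])
weights = class_counts.sum() / (2 * class_counts)
criterion = nn.CrossEntropyLoss(weight=weights)

x = torch.randn(4, 1, 96, 96, 96)            # dummy sMRI batch
loss = criterion(model(x), torch.tensor([0, 1, 0, 0]))
```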

{"title":"Understanding Clinical Progression of Late-Life Depression to Alzheimer's Disease Over 5 Years with Structural MRI.","authors":"Lintao Zhang,&nbsp;Minhui Yu,&nbsp;Lihong Wang,&nbsp;David C Steffens,&nbsp;Rong Wu,&nbsp;Guy G Potter,&nbsp;Mingxia Liu","doi":"10.1007/978-3-031-21014-3_27","DOIUrl":"https://doi.org/10.1007/978-3-031-21014-3_27","url":null,"abstract":"<p><p>Previous studies have shown that late-life depression (LLD) may be a precursor of neurodegenerative diseases and may increase the risk of dementia. At present, the pathological relationship between LLD and dementia, in particularly Alzheimer's disease (AD) is unclear. Structural MRI (sMRI) can provide objective biomarkers for the computer-aided diagnosis of LLD and AD, providing a promising solution to understand the clinical progression of brain disorders. But few studies have focused on sMRI-based predictive analysis of clinical progression from LLD to AD. In this paper, we develop a deep learning method to predict the clinical progression of LLD to AD up to 5 years after baseline time using T1-weighted structural MRIs. We also analyze several important factors that limit the diagnostic performance of learning-based methods, including data imbalance, small-sample-size, and multi-site data heterogeneity, by leveraging a relatively large-scale database to aid model training. Experimental results on 308 subjects with sMRIs acquired from 2 imaging sites and the publicly available ADNI database demonstrate the potential of deep learning in predicting the clinical progression of LLD to AD. To the best of our knowledge, this is among the first attempts to explore the complex pathophysiological relationship between LLD and AD based on structural MRI using a deep learning method.</p>","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":"13583 ","pages":"259-268"},"PeriodicalIF":0.0,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9805302/pdf/nihms-1859375.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9838060","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Multi-scale Multi-structure Siamese Network (MMSNet) for Primary Open-Angle Glaucoma Prediction.
Pub Date : 2022-09-01 Epub Date: 2022-12-16 DOI: 10.1007/978-3-031-21014-3_45
Mingquan Lin, Lei Liu, Mae Gorden, Michael Kass, Sarah Van Tassel, Fei Wang, Yifan Peng

Primary open-angle glaucoma (POAG) is one of the leading causes of irreversible blindness in the United States and worldwide. Predicting POAG before onset plays an important role in early treatment. Although deep learning methods have been proposed to predict POAG, they mainly focus on current-status prediction, and all of them use a single image as input. Glaucoma specialists, by contrast, identify a glaucomatous eye by comparing the follow-up optic nerve image with the baseline, together with supplementary clinical data. To simulate this process, we propose a Multi-scale Multi-structure Siamese Network (MMSNet) to predict future POAG events from fundus photographs. MMSNet consists of two side-outputs for deep supervision and 2D blocks that utilize two-dimensional features to assist classification. The network was trained and evaluated on a large dataset: 37,339 fundus photographs from 1,636 Ocular Hypertension Treatment Study (OHTS) participants. Extensive experiments show that MMSNet outperforms the state of the art on two "POAG prediction before onset" tasks, with AUCs of 0.9312 and 0.9507, which are 0.2204 and 0.1490 higher than the state of the art, respectively. In addition, an ablation study checks the contribution of different components. These results highlight the potential of deep learning to assist and enhance the prediction of future POAG events. The proposed network will be publicly available at https://github.com/bionlplab/MMSNet.
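
A minimal Siamese sketch of the comparison idea behind MMSNet: one shared CNN encodes the baseline and follow-up fundus photographs, and the two embeddings are fused for prediction. The ResNet-18 backbone, concatenation fusion, and head below are illustrative assumptions, not the published multi-scale multi-structure architecture.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class SiameseFundusNet(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()           # 512-d embedding per image
        self.encoder = backbone               # shared between both views
        self.head = nn.Sequential(
            nn.Linear(512 * 2, 128), nn.ReLU(),
            nn.Linear(128, 1),                # POAG risk logit
        )

    def forward(self, baseline, followup):    # each: (B, 3, 224, 224)
        zb = self.encoder(baseline)
        zf = self.encoder(followup)           # same weights as baseline branch
        return self.head(torch.cat([zb, zf], dim=1))

net = SiameseFundusNet()
logit = net(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
```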

{"title":"Multi-scale Multi-structure Siamese Network (MMSNet) for Primary Open-Angle Glaucoma Prediction.","authors":"Mingquan Lin, Lei Liu, Mae Gorden, Michael Kass, Sarah Van Tassel, Fei Wang, Yifan Peng","doi":"10.1007/978-3-031-21014-3_45","DOIUrl":"10.1007/978-3-031-21014-3_45","url":null,"abstract":"<p><p>Primary open-angle glaucoma (POAG) is one of the leading causes of irreversible blindness in the United States and worldwide. POAG prediction before onset plays an important role in early treatment. Although deep learning methods have been proposed to predict POAG, these methods mainly focus on current status prediction. In addition, all these methods used a single image as input. On the other hand, glaucoma specialists determine a glaucomatous eye by comparing the follow-up optic nerve image with the baseline along with supplementary clinical data. To simulate this process, we proposed a Multi-scale Multi-structure Siamese Network (MMSNet) to predict future POAG event from fundus photographs. The MMSNet consists of two side-outputs for deep supervision and 2D blocks to utilize two-dimensional features to assist classification. The MMSNet network was trained and evaluated on a large dataset: 37,339 fundus photographs from 1,636 Ocular Hypertension Treatment Study (OHTS) participants. Extensive experiments show that MMSNet outperforms the state-of-the-art on two \"POAG prediction before onset\" tasks. Our AUC are 0.9312 and 0.9507, which are 0.2204 and 0.1490 higher than the state-of-the-art, respectively. In addition, an ablation study is performed to check the contribution of different components. These results highlight the potential of deep learning to assist and enhance the prediction of future POAG event. The proposed network will be publicly available on https://github.com/bionlplab/MMSNet.</p>","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":"13583 ","pages":"436-445"},"PeriodicalIF":0.0,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9844668/pdf/nihms-1864372.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10604661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Fast Image-Level MRI Harmonization via Spectrum Analysis.
Pub Date : 2022-09-01 Epub Date: 2022-12-16 DOI: 10.1007/978-3-031-21014-3_21
Hao Guan, Siyuan Liu, Weili Lin, Pew-Thian Yap, Mingxia Liu

Pooling structural magnetic resonance imaging (MRI) data from different imaging sites helps increase the sample size for machine learning based neuroimage analysis, but usually suffers from significant cross-site and/or cross-scanner data heterogeneity. Existing studies often focus on reducing this heterogeneity at the handcrafted-feature level for specific tasks (e.g., classification or segmentation), limiting their adaptability in clinical practice, and research on image-level MRI harmonization targeting a broad range of applications is very limited. In this paper, we develop a spectrum swapping based image-level MRI harmonization (SSIMH) framework. Different from previous work, our method focuses on alleviating cross-scanner heterogeneity at the raw image level. We first use spectrum analysis to explore the influence of different frequency components on MRI harmonization, and then apply a spectrum swapping method to harmonize raw MRIs acquired by different scanners. Our method does not rely on complex model training and can be directly applied to fast, real-time MRI harmonization. Experimental results on T1- and T2-weighted MRIs of phantom subjects acquired with different scanners from the public ABCD dataset suggest the effectiveness of our method for structural MRI harmonization at the image level.
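
A minimal sketch in the spirit of SSIMH: replace the low-frequency amplitude spectrum of a source slice with that of a reference slice from another scanner while keeping the source phase, so image content is preserved but scanner-dependent intensity characteristics move toward the reference. The cutoff ratio and 2D-slice formulation are illustrative assumptions; the paper's exact swapping rule may differ.

```python
import numpy as np

def swap_low_freq_amplitude(source: np.ndarray, reference: np.ndarray,
                            ratio: float = 0.1) -> np.ndarray:
    """source, reference: 2D slices of equal shape; returns a harmonized slice."""
    fs = np.fft.fftshift(np.fft.fft2(source))
    fr = np.fft.fftshift(np.fft.fft2(reference))
    amp_s, phase_s = np.abs(fs), np.angle(fs)
    amp_r = np.abs(fr)

    h, w = source.shape
    ch, cw = h // 2, w // 2
    bh, bw = int(h * ratio / 2), int(w * ratio / 2)
    # Swap only the central (low-frequency) amplitude block.
    amp_s[ch - bh:ch + bh, cw - bw:cw + bw] = amp_r[ch - bh:ch + bh, cw - bw:cw + bw]

    harmonized = np.fft.ifft2(np.fft.ifftshift(amp_s * np.exp(1j * phase_s)))
    return np.real(harmonized)

src = np.random.rand(256, 256)               # stand-in source slice
ref = np.random.rand(256, 256)               # stand-in reference slice
out = swap_low_freq_amplitude(src, ref, ratio=0.1)
```

No model training is involved, which is what makes this family of methods fast enough for real-time use.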

{"title":"Fast Image-Level MRI Harmonization via Spectrum Analysis.","authors":"Hao Guan, Siyuan Liu, Weili Lin, Pew-Thian Yap, Mingxia Liu","doi":"10.1007/978-3-031-21014-3_21","DOIUrl":"10.1007/978-3-031-21014-3_21","url":null,"abstract":"<p><p>Pooling structural magnetic resonance imaging (MRI) data from different imaging sites helps increase sample size to facilitate machine learning based neuroimage analysis, but usually suffers from significant cross-site and/or cross-scanner data heterogeneity. Existing studies often focus on reducing cross-site and/or cross-scanner heterogeneity at handcrafted feature level targeting specific tasks (e.g., classification or segmentation), limiting their adaptability in clinical practice. Research on image-level MRI harmonization targeting a broad range of applications is very limited. In this paper, we develop a spectrum swapping based image-level MRI harmonization (SSIMH) framework. Different from previous work, our method focuses on alleviating cross-scanner heterogeneity at <i>raw image level</i>. We first construct <i>spectrum analysis</i> to explore the influences of different frequency components on MRI harmonization. We then utilize a <i>spectrum swapping</i> method for the harmonization of raw MRIs acquired by different scanners. Our method does not rely on complex model training, and can be directly applied to fast real-time MRI harmonization. Experimental results on T1- and T2-weighted MRIs of phantom subjects acquired by using different scanners from the public ABCD dataset suggest the effectiveness of our method in structural MRI harmonization at the image level.</p>","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":"13583 ","pages":"201-209"},"PeriodicalIF":0.0,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9805301/pdf/nihms-1859376.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10467950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Dynamic Linear Transformer for 3D Biomedical Image Segmentation.
Pub Date : 2022-09-01 Epub Date: 2022-12-16 DOI: 10.1007/978-3-031-21014-3_18
Zheyuan Zhang, Ulas Bagci

Transformer-based neural networks have achieved promising performance on many biomedical image segmentation tasks thanks to the better global information modeling offered by the self-attention mechanism. However, most methods are still designed for 2D medical images and ignore the essential 3D volume information. The main challenge for 3D Transformer-based segmentation methods is the quadratic complexity introduced by the self-attention mechanism [17]. In this paper, we address these two research gaps, the lack of 3D methods and the computational complexity of Transformers, by proposing a novel Transformer architecture with an encoder-decoder style and linear complexity. Furthermore, we introduce a dynamic token concept to further reduce the number of tokens used in the self-attention calculation. Taking advantage of the global information modeling, we provide uncertainty maps from different hierarchy stages. We evaluate this method on multiple challenging CT pancreas segmentation datasets. Our results show that our novel 3D Transformer-based segmenter provides promising, highly feasible segmentation performance and accurate uncertainty quantification using a single annotation. Code is available at https://github.com/freshman97/LinTransUNet.
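
A minimal sketch of the linear-complexity attention family this paper builds on: a positive kernel feature map phi(.) lets attention be computed as phi(Q)(phi(K)^T V), which is linear rather than quadratic in the number of tokens. This generic kernelized formulation (ELU+1 feature map) is an assumption for illustration, not the paper's exact dynamic-token layer.

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps: float = 1e-6):
    """q, k, v: (B, N, D). Cost O(N * D^2) instead of O(N^2 * D)."""
    q = F.elu(q) + 1                              # positive feature map phi(.)
    k = F.elu(k) + 1
    kv = torch.einsum('bnd,bne->bde', k, v)       # sum over tokens first: (B, D, D)
    z = 1.0 / (torch.einsum('bnd,bd->bn', q, k.sum(dim=1)) + eps)  # normalizer
    return torch.einsum('bnd,bde,bn->bne', q, kv, z)

q = k = v = torch.randn(2, 4096, 64)              # e.g., 4096 tokens from a 3D volume
out = linear_attention(q, k, v)                   # (2, 4096, 64)
```

Because the token-sum is computed before the query product, doubling the token count only doubles the cost, which is what makes full 3D volumes tractable.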

{"title":"Dynamic Linear Transformer for 3D Biomedical Image Segmentation.","authors":"Zheyuan Zhang, Ulas Bagci","doi":"10.1007/978-3-031-21014-3_18","DOIUrl":"10.1007/978-3-031-21014-3_18","url":null,"abstract":"<p><p>Transformer-based neural networks have surpassed promising performance on many biomedical image segmentation tasks due to a better global information modeling from the self-attention mechanism. However, most methods are still designed for 2D medical images while ignoring the essential 3D volume information. The main challenge for 3D Transformer-based segmentation methods is the quadratic complexity introduced by the self-attention mechanism [17]. In this paper, we are addressing these two research gaps, lack of 3D methods and computational complexity in Transformers, by proposing a novel Transformer architecture that has an encoder-decoder style architecture with linear complexity. Furthermore, we newly introduce a dynamic token concept to further reduce the token numbers for self-attention calculation. Taking advantage of the global information modeling, we provide uncertainty maps from different hierarchy stages. We evaluate this method on multiple challenging CT pancreas segmentation datasets. Our results show that our novel 3D Transformer-based segmentor could provide promising highly feasible segmentation performance and accurate uncertainty quantification using single annotation. Code is available https://github.com/freshman97/LinTransUNet.</p>","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":"13583 ","pages":"171-180"},"PeriodicalIF":0.0,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9911329/pdf/nihms-1870553.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10721278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Predicting Age-related Macular Degeneration Progression with Longitudinal Fundus Images Using Deep Learning.
Pub Date : 2022-09-01 DOI: 10.1007/978-3-031-21014-3_2
Junghwan Lee, Tingyi Wanyan, Qingyu Chen, Tiarnan D L Keenan, Benjamin S Glicksberg, Emily Y Chew, Zhiyong Lu, Fei Wang, Yifan Peng

Accurately predicting a patient's risk of progressing to late age-related macular degeneration (AMD) is difficult but crucial for personalized medicine. While existing risk prediction models for progression to late AMD are useful for triaging patients, none utilizes the longitudinal color fundus photographs (CFPs) in a patient's history to estimate the risk of late AMD in a given subsequent time interval. In this work, we evaluate how deep neural networks capture the sequential information in longitudinal CFPs and improve the prediction of 2-year and 5-year risk of progression to late AMD. Specifically, we propose two deep learning models, CNN-LSTM and CNN-Transformer, which combine convolutional neural networks (CNNs) with a Long Short-Term Memory (LSTM) network and a Transformer, respectively, to capture the sequential information in longitudinal CFPs. We evaluated our models against baselines on the Age-Related Eye Disease Study, one of the largest longitudinal AMD cohorts with CFPs. The proposed models outperformed baseline models that utilized only single-visit CFPs to predict the risk of late AMD (AUC 0.879 vs 0.868 for 2-year prediction, and 0.879 vs 0.862 for 5-year prediction). Further experiments showed that utilizing longitudinal CFPs over a longer time period helps deep learning models predict the risk of late AMD. We made the source code available at https://github.com/bionlplab/AMD_prognosis_mlmi2022 to catalyze future work on deep learning models for late AMD prediction.
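
A minimal CNN-LSTM sketch of the sequential idea described above: a shared CNN embeds each visit's fundus photograph, an LSTM summarizes the visit sequence, and a linear head outputs the late-AMD risk logit. The ResNet-18 backbone, hidden size, and fixed-length sequence handling are illustrative assumptions, not the released code.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class CNNLSTM(nn.Module):
    def __init__(self, hidden: int = 128):
        super().__init__()
        cnn = resnet18(weights=None)
        cnn.fc = nn.Identity()                      # 512-d feature per image
        self.cnn = cnn
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)            # late-AMD risk logit

    def forward(self, x):                           # x: (B, T, 3, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).view(b, t, -1)  # per-visit embeddings
        _, (h, _) = self.lstm(feats)
        return self.head(h[-1])                     # last hidden state -> risk

model = CNNLSTM()
logit = model(torch.randn(2, 4, 3, 224, 224))       # 4 visits per patient
```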

{"title":"Predicting Age-related Macular Degeneration Progression with Longitudinal Fundus Images Using Deep Learning.","authors":"Junghwan Lee,&nbsp;Tingyi Wanyan,&nbsp;Qingyu Chen,&nbsp;Tiarnan D L Keenan,&nbsp;Benjamin S Glicksberg,&nbsp;Emily Y Chew,&nbsp;Zhiyong Lu,&nbsp;Fei Wang,&nbsp;Yifan Peng","doi":"10.1007/978-3-031-21014-3_2","DOIUrl":"https://doi.org/10.1007/978-3-031-21014-3_2","url":null,"abstract":"<p><p>Accurately predicting a patient's risk of progressing to late age-related macular degeneration (AMD) is difficult but crucial for personalized medicine. While existing risk prediction models for progression to late AMD are useful for triaging patients, none utilizes longitudinal color fundus photographs (CFPs) in a patient's history to estimate the risk of late AMD in a given subsequent time interval. In this work, we seek to evaluate how deep neural networks capture the sequential information in longitudinal CFPs and improve the prediction of 2-year and 5-year risk of progression to late AMD. Specifically, we proposed two deep learning models, CNN-LSTM and CNN-Transformer, which use a Long-Short Term Memory (LSTM) and a Transformer, respectively with convolutional neural networks (CNN), to capture the sequential information in longitudinal CFPs. We evaluated our models in comparison to baselines on the Age-Related Eye Disease Study, one of the largest longitudinal AMD cohorts with CFPs. The proposed models outperformed the baseline models that utilized only single-visit CFPs to predict the risk of late AMD (0.879 vs 0.868 in AUC for 2-year prediction, and 0.879 vs 0.862 for 5-year prediction). Further experiments showed that utilizing longitudinal CFPs over a longer time period was helpful for deep learning models to predict the risk of late AMD. We made the source code available at https://github.com/bionlplab/AMD_prognosis_mlmi2022 to catalyze future works that seek to develop deep learning models for late AMD prediction.</p>","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":"13583 ","pages":"11-20"},"PeriodicalIF":0.0,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9842432/pdf/nihms-1859202.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10604660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Harmonization of Multi-site Cortical Data Across the Human Lifespan.
Pub Date : 2022-09-01 Epub Date: 2022-12-16 DOI: 10.1007/978-3-031-21014-3_23
Sahar Ahmad, Fang Nan, Ye Wu, Zhengwang Wu, Weili Lin, Li Wang, Gang Li, Di Wu, Pew-Thian Yap

Neuroimaging data harmonization has become a prerequisite in integrative data analytics for standardizing a wide variety of data collected from multiple studies and enabling interdisciplinary research. The lack of standardized image acquisition and computational procedures introduces non-biological variability and inconsistency in multi-site data, complicating downstream statistical analyses. Here, we propose a novel statistical technique to retrospectively harmonize multi-site cortical data collected longitudinally and cross-sectionally between birth and 100 years. We demonstrate that our method can effectively eliminate non-biological disparities from cortical thickness and myelination measurements, while preserving biological variation across the entire lifespan. Our harmonization method will foster large-scale population studies by providing comparable data required for investigating developmental and aging processes.
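
A minimal location-scale sketch of the general family of retrospective site-effect removal this paper belongs to: per site, shift and rescale a cortical measure toward a pooled reference. This simplification deliberately ignores the age-dependent (lifespan) modeling the paper performs, which is needed to preserve biological variation; function name and data are illustrative assumptions.

```python
import numpy as np

def harmonize_location_scale(values: np.ndarray, sites: np.ndarray) -> np.ndarray:
    """values: (n_subjects,) cortical measure; sites: (n_subjects,) site labels."""
    out = values.astype(float).copy()
    grand_mean, grand_std = values.mean(), values.std()
    for s in np.unique(sites):
        m = sites == s
        site_mean, site_std = values[m].mean(), values[m].std()
        # Remove the site's additive and multiplicative offsets.
        out[m] = (values[m] - site_mean) / (site_std + 1e-12) * grand_std + grand_mean
    return out

# Two synthetic sites with different means/variances of cortical thickness (mm).
thickness = np.concatenate([np.random.normal(2.6, 0.2, 100),
                            np.random.normal(2.4, 0.3, 100)])
site = np.array([0] * 100 + [1] * 100)
harmonized = harmonize_location_scale(thickness, site)
```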

{"title":"Harmonization of Multi-site Cortical Data Across the Human Lifespan.","authors":"Sahar Ahmad, Fang Nan, Ye Wu, Zhengwang Wu, Weili Lin, Li Wang, Gang Li, Di Wu, Pew-Thian Yap","doi":"10.1007/978-3-031-21014-3_23","DOIUrl":"10.1007/978-3-031-21014-3_23","url":null,"abstract":"<p><p>Neuroimaging data harmonization has become a prerequisite in integrative data analytics for standardizing a wide variety of data collected from multiple studies and enabling interdisciplinary research. The lack of standardized image acquisition and computational procedures introduces non-biological variability and inconsistency in multi-site data, complicating downstream statistical analyses. Here, we propose a novel statistical technique to retrospectively harmonize multi-site cortical data collected longitudinally and cross-sectionally between birth and 100 years. We demonstrate that our method can effectively eliminate non-biological disparities from cortical thickness and myelination measurements, while preserving biological variation across the entire lifespan. Our harmonization method will foster large-scale population studies by providing comparable data required for investigating developmental and aging processes.</p>","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":"13583 ","pages":"220-229"},"PeriodicalIF":0.0,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10134963/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9752268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Dynamic Linear Transformer for 3D Biomedical Image Segmentation (arXiv preprint)
Pub Date : 2022-06-01 DOI: 10.48550/arXiv.2206.00771
Zheyu Zhang, Ulas Bagci
Transformer-based neural networks have achieved promising performance on many biomedical image segmentation tasks thanks to the better global information modeling offered by the self-attention mechanism. However, most methods are still designed for 2D medical images and ignore the essential 3D volume information. The main challenge for 3D Transformer-based segmentation methods is the quadratic complexity introduced by the self-attention mechanism [17]. In this paper, we address these two research gaps, the lack of 3D methods and the computational complexity of Transformers, by proposing a novel Transformer architecture with an encoder-decoder style and linear complexity. Furthermore, we introduce a dynamic token concept to further reduce the number of tokens used in the self-attention calculation. Taking advantage of the global information modeling, we provide uncertainty maps from different hierarchy stages. We evaluate this method on multiple challenging CT pancreas segmentation datasets. Our results show that our novel 3D Transformer-based segmenter provides promising, highly feasible segmentation performance and accurate uncertainty quantification using a single annotation. Code is available at https://github.com/freshman97/LinTransUNet.
{"title":"Dynamic Linear Transformer for 3D Biomedical Image Segmentation","authors":"Zheyu Zhang, Ulas Bagci","doi":"10.48550/arXiv.2206.00771","DOIUrl":"https://doi.org/10.48550/arXiv.2206.00771","url":null,"abstract":"Transformer-based neural networks have surpassed promising performance on many biomedical image segmentation tasks due to a better global information modeling from the self-attention mechanism. However, most methods are still designed for 2D medical images while ignoring the essential 3D volume information. The main challenge for 3D Transformer-based segmentation methods is the quadratic complexity introduced by the self-attention mechanism [17]. In this paper, we are addressing these two research gaps, lack of 3D methods and computational complexity in Transformers, by proposing a novel Transformer architecture that has an encoder-decoder style architecture with linear complexity. Furthermore, we newly introduce a dynamic token concept to further reduce the token numbers for self-attention calculation. Taking advantage of the global information modeling, we provide uncertainty maps from different hierarchy stages. We evaluate this method on multiple challenging CT pancreas segmentation datasets. Our results show that our novel 3D Transformer-based segmentor could provide promising highly feasible segmentation performance and accurate uncertainty quantification using single annotation. Code is available https://github.com/freshman97/LinTransUNet.","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":"10 1","pages":"171-180"},"PeriodicalIF":0.0,"publicationDate":"2022-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88614639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Correction to: Machine Learning in Medical Imaging
Pub Date : 2021-09-21 DOI: 10.1007/978-3-030-87589-3_72
C. Lian, Xiaohuan Cao, I. Rekik, Xuanang Xu, Pingkun Yan
{"title":"Correction to: Machine Learning in Medical Imaging","authors":"C. Lian, Xiaohuan Cao, I. Rekik, Xuanang Xu, Pingkun Yan","doi":"10.1007/978-3-030-87589-3_72","DOIUrl":"https://doi.org/10.1007/978-3-030-87589-3_72","url":null,"abstract":"","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":"114 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75724842","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Hierarchical 3D Feature Learning for Pancreas Segmentation.
Pub Date : 2021-09-01 Epub Date: 2021-09-21 DOI: 10.1007/978-3-030-87589-3_25
Federica Proietto Salanitri, Giovanni Bellitto, Ismail Irmakci, Simone Palazzo, Ulas Bagci, Concetto Spampinato

We propose a novel 3D fully convolutional deep network for automated pancreas segmentation from both MRI and CT scans. More specifically, the proposed model consists of a 3D encoder that learns to extract volume features at different scales; features taken at different points of the encoder hierarchy are then sent to multiple 3D decoders that individually predict intermediate segmentation maps. Finally, all segmentation maps are combined to obtain a unique detailed segmentation mask. We test our model on both CT and MRI imaging data: the publicly available NIH Pancreas-CT dataset (82 contrast-enhanced CTs) and a private MRI dataset (40 MRI scans). Experimental results show that our model outperforms existing methods on CT pancreas segmentation, obtaining an average Dice score of about 88%, and yields promising segmentation performance on a very challenging MRI dataset (average Dice score of about 77%). Additional control experiments demonstrate that the achieved performance is due to the combination of our 3D fully convolutional deep network and the hierarchical representation decoding, thus substantiating our architectural design.
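
A minimal two-scale sketch of the multi-decoder idea in this abstract: encoder features at two depths each feed a small decoder that predicts an intermediate segmentation map, and the upsampled intermediate maps are fused into the final mask. Channel counts, depth (two scales instead of the paper's full hierarchy), and the 1x1-conv fusion rule are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoScaleSegNet(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv3d(1, 16, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool3d(2)
        self.enc2 = nn.Sequential(nn.Conv3d(16, 32, 3, padding=1), nn.ReLU())
        self.dec1 = nn.Conv3d(16, n_classes, 1)      # decoder at full scale
        self.dec2 = nn.Conv3d(32, n_classes, 1)      # decoder at half scale
        self.fuse = nn.Conv3d(2 * n_classes, n_classes, 1)

    def forward(self, x):                            # x: (B, 1, D, H, W)
        f1 = self.enc1(x)
        f2 = self.enc2(self.down(f1))
        m1 = self.dec1(f1)                           # intermediate map, scale 1
        m2 = F.interpolate(self.dec2(f2), size=x.shape[2:],
                           mode='trilinear', align_corners=False)  # scale 1/2, upsampled
        return self.fuse(torch.cat([m1, m2], dim=1)), (m1, m2)

net = TwoScaleSegNet()
final, intermediates = net(torch.randn(1, 1, 32, 64, 64))
# Training would supervise both `final` and each intermediate map (deep supervision).
```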

{"title":"Hierarchical 3D Feature Learning for Pancreas Segmentation.","authors":"Federica Proietto Salanitri, Giovanni Bellitto, Ismail Irmakci, Simone Palazzo, Ulas Bagci, Concetto Spampinato","doi":"10.1007/978-3-030-87589-3_25","DOIUrl":"10.1007/978-3-030-87589-3_25","url":null,"abstract":"<p><p>We propose a novel 3D fully convolutional deep network for automated pancreas segmentation from both MRI and CT scans. More specifically, the proposed model consists of a 3D encoder that learns to extract volume features at different scales; features taken at different points of the encoder hierarchy are then sent to multiple 3D decoders that individually predict intermediate segmentation maps. Finally, all segmentation maps are combined to obtain a unique detailed segmentation mask. We test our model on both CT and MRI imaging data: the publicly available NIH Pancreas-CT dataset (consisting of 82 contrast-enhanced CTs) and a private MRI dataset (consisting of 40 MRI scans). Experimental results show that our model outperforms existing methods on CT pancreas segmentation, obtaining an average Dice score of about 88%, and yields promising segmentation performance on a very challenging MRI data set (average Dice score is about 77%). Additional control experiments demonstrate that the achieved performance is due to the combination of our 3D fully-convolutional deep network and the hierarchical representation decoding, thus substantiating our architectural design.</p>","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":"12966 ","pages":"238-247"},"PeriodicalIF":0.0,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9921296/pdf/nihms-1871453.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10721275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Knowledge-Guided Multiview Deep Curriculum Learning for Elbow Fracture Classification.
Pub Date : 2021-09-01 Epub Date: 2021-09-21 DOI: 10.1007/978-3-030-87589-3_57
Jun Luo, Gene Kitamura, Dooman Arefan, Emine Doganay, Ashok Panigrahy, Shandong Wu

Elbow fracture diagnosis often requires both frontal and lateral views of elbow X-ray radiographs. In this paper, we propose a multiview deep learning method for an elbow fracture subtype classification task. Our strategy leverages transfer learning: we first train two single-view models, one for the frontal view and the other for the lateral view, and then transfer their weights to the corresponding layers in the proposed multiview network architecture. Meanwhile, quantitative medical knowledge is integrated into the training process through a curriculum learning framework, which lets the model first learn from "easier" samples and then transition to "harder" samples to reach better performance. In addition, our multiview network can work both in a dual-view setting and with a single view as input. We evaluate our method through extensive experiments on an elbow fracture classification task with a dataset of 1,964 images. Results show that our method outperforms two related methods on bone fracture study in multiple settings, and our technique is able to boost the performance of the compared methods. The code is available at https://github.com/ljaiverson/multiview-curriculum.
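
A minimal sketch of the curriculum scheduling idea: assign each training sample a difficulty score (here a random placeholder standing in for the paper's knowledge-derived scores) and sample easier cases more often in early epochs, flattening toward uniform sampling as training proceeds. The exponential schedule, scores, and data shapes are illustrative assumptions.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# Placeholder stand-ins for the 1,964 elbow radiographs and their labels.
images = torch.randn(200, 3, 64, 64)
labels = torch.randint(0, 3, (200,))
difficulty = torch.rand(200)                  # 0 = easy, 1 = hard (hypothetical scores)

def curriculum_loader(epoch: int, total_epochs: int, batch_size: int = 32):
    # Early epochs strongly down-weight hard samples; weights flatten over time.
    progress = epoch / max(total_epochs - 1, 1)
    weights = torch.exp(-(1.0 - progress) * 4.0 * difficulty)
    sampler = WeightedRandomSampler(weights, num_samples=len(weights))
    return DataLoader(TensorDataset(images, labels),
                      batch_size=batch_size, sampler=sampler)

loader = curriculum_loader(epoch=0, total_epochs=20)   # mostly easy samples first
```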

{"title":"Knowledge-Guided Multiview Deep Curriculum Learning for Elbow Fracture Classification.","authors":"Jun Luo, Gene Kitamura, Dooman Arefan, Emine Doganay, Ashok Panigrahy, Shandong Wu","doi":"10.1007/978-3-030-87589-3_57","DOIUrl":"10.1007/978-3-030-87589-3_57","url":null,"abstract":"<p><p>Elbow fracture diagnosis often requires patients to take both frontal and lateral views of elbow X-ray radiographs. In this paper, we propose a multiview deep learning method for an elbow fracture subtype classification task. Our strategy leverages transfer learning by first training two single-view models, one for frontal view and the other for lateral view, and then transferring the weights to the corresponding layers in the proposed multiview network architecture. Meanwhile, quantitative medical knowledge was integrated into the training process through a curriculum learning framework, which enables the model to first learn from \"easier\" samples and then transition to \"harder\" samples to reach better performance. In addition, our multiview network can work both in a dual-view setting and with a single view as input. We evaluate our method through extensive experiments on a classification task of elbow fracture with a dataset of 1,964 images. Results show that our method outperforms two related methods on bone fracture study in multiple settings, and our technique is able to boost the performance of the compared methods. The code is available at https://github.com/ljaiverson/multiview-curriculum.</p>","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":"12966 ","pages":"555-564"},"PeriodicalIF":0.0,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10557058/pdf/nihms-1933007.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41175565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0