
Machine learning in medical imaging. MLMI (Workshop): Latest Publications

Globally-Aware Multiple Instance Classifier for Breast Cancer Screening.
Pub Date : 2019-10-01 Epub Date: 2019-10-10 DOI: 10.1007/978-3-030-32692-0_3
Yiqiu Shen, Nan Wu, Jason Phang, Jungkyu Park, Gene Kim, Linda Moy, Kyunghyun Cho, Krzysztof J Geras

Deep learning models designed for visual classification tasks on natural images have become prevalent in medical image analysis. However, medical images differ from typical natural images in many ways, such as significantly higher resolutions and smaller regions of interest. Moreover, both the global structure and local details play important roles in medical image analysis tasks. To address these unique properties of medical images, we propose a neural network that is able to classify breast cancer lesions utilizing information from both a global saliency map and multiple local patches. The proposed model outperforms the ResNet-based baseline and achieves radiologist-level performance in the interpretation of screening mammography. Although our model is trained only with image-level labels, it is able to generate pixel-level saliency maps that provide localization of possible malignant findings.
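As a rough illustration of the fusion idea this abstract describes, the sketch below combines a score pooled from a global saliency map with a multiple-instance score over local patches. The top-k pooling rule, the max aggregation, and the fusion weight are illustrative assumptions, not the authors' exact architecture.

```python
import numpy as np

def topk_pool(saliency, k=5):
    """Aggregate a saliency map into one score via top-k average pooling."""
    flat = np.sort(saliency.ravel())[::-1]
    return float(flat[:k].mean())

def fuse_global_local(saliency_map, patch_scores, w_global=0.5):
    """Fuse the global saliency score with the max patch score.
    The multiple-instance 'max' rule and w_global are hypothetical choices."""
    g = topk_pool(saliency_map)
    l = float(np.max(patch_scores))
    return w_global * g + (1 - w_global) * l
```

With this scheme, a lesion that is salient globally or in any single local patch can raise the image-level malignancy score.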

Citations: 0
Jointly Discriminative and Generative Recurrent Neural Networks for Learning from fMRI.
Pub Date : 2019-10-01 Epub Date: 2019-10-10 DOI: 10.1007/978-3-030-32692-0_44
Nicha C Dvornek, Xiaoxiao Li, Juntang Zhuang, James S Duncan

Recurrent neural networks (RNNs) were designed for dealing with time-series data and have recently been used for creating predictive models from functional magnetic resonance imaging (fMRI) data. However, gathering large fMRI datasets for learning is a difficult task. Furthermore, network interpretability is unclear. To address these issues, we utilize multitask learning and design a novel RNN-based model that learns to discriminate between classes while simultaneously learning to generate the fMRI time-series data. Employing the long short-term memory (LSTM) structure, we develop a discriminative model based on the hidden state and a generative model based on the cell state. The addition of the generative model constrains the network to learn functional communities represented by the LSTM nodes that are both consistent with the data generation as well as useful for the classification task. We apply our approach to the classification of subjects with autism vs. healthy controls using several datasets from the Autism Brain Imaging Data Exchange. Experiments show that our jointly discriminative and generative model improves classification learning while also producing robust and meaningful functional communities for better model understanding.
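A minimal numpy sketch of the two-headed idea described above: one LSTM step, with a discriminative head reading the hidden state and a generative head reading the cell state. All dimensions, weights, and the linear heads are toy assumptions for illustration, not the paper's trained model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; gate rows stacked as [input, forget, output, candidate]."""
    n = h.size
    z = W @ x + U @ h + b
    i, f, o = sigmoid(z[:n]), sigmoid(z[n:2*n]), sigmoid(z[2*n:3*n])
    g = np.tanh(z[3*n:])
    c_new = f * c + i * g          # cell state: feeds the generative head
    h_new = o * np.tanh(c_new)     # hidden state: feeds the discriminative head
    return h_new, c_new

rng = np.random.default_rng(0)
n_in, n_hid, n_cls = 6, 4, 2
W = rng.standard_normal((4 * n_hid, n_in))
U = rng.standard_normal((4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
Wd = rng.standard_normal((n_cls, n_hid))   # discriminative head (class logits)
Wg = rng.standard_normal((n_in, n_hid))    # generative head (next fMRI frame)

h, c = np.zeros(n_hid), np.zeros(n_hid)
for t in range(10):                        # a toy fMRI time series
    x = rng.standard_normal(n_in)
    h, c = lstm_step(x, h, c, W, U, b)
logits = Wd @ h                            # classification from hidden state
x_hat = Wg @ c                             # generation from cell state
```

Training would combine a classification loss on `logits` with a reconstruction loss on `x_hat`, which is the multitask constraint the abstract credits for more interpretable functional communities.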

Citations: 20
Distanced LSTM: Time-Distanced Gates in Long Short-Term Memory Models for Lung Cancer Detection.
Pub Date : 2019-10-01 Epub Date: 2019-10-10
Riqiang Gao, Yuankai Huo, Shunxing Bao, Yucheng Tang, Sanja L Antic, Emily S Epstein, Aneri B Balar, Steve Deppen, Alexis B Paulson, Kim L Sandler, Pierre P Massion, Bennett A Landman

The field of lung nodule detection and cancer prediction has been developing rapidly with the support of large public data archives. Previous studies have largely focused on cross-sectional (single) CT data. Herein, we consider longitudinal data. The Long Short-Term Memory (LSTM) model addresses learning with regularly spaced time points (i.e., equal temporal intervals). However, clinical imaging follows patient needs, with often heterogeneous, irregular acquisitions. To model both regular and irregular longitudinal samples, we generalize the LSTM model with the Distanced LSTM (DLSTM) for temporally varied acquisitions. The DLSTM includes a Temporal Emphasis Model (TEM) that enables learning across regularly and irregularly sampled intervals. Briefly, (1) the temporal intervals between longitudinal scans are modeled explicitly; (2) temporally adjustable forget and input gates are introduced for irregular temporal sampling; and (3) the latest longitudinal scan has an additional emphasis term. We evaluate the DLSTM framework on three datasets: simulated data, 1,794 National Lung Screening Trial (NLST) scans, and 1,420 clinically acquired scans with heterogeneous and irregular temporal acquisition. Experiments on the first two datasets demonstrate that our method achieves competitive performance on both simulated and regularly sampled data (e.g., improving the F1 score of the LSTM from 0.6785 to 0.7085 on NLST). In external validation on the clinically and irregularly acquired data, the benchmarks achieved 0.8350 (CNN feature) and 0.8380 (LSTM) in area under the ROC curve (AUC), while the proposed DLSTM achieves 0.8905.
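One plausible form of the temporal emphasis the abstract describes is an exponential decay in the time distance to the latest scan, so that recent acquisitions weigh more regardless of sampling regularity. The functional form and decay rate below are assumptions for illustration, not the paper's exact TEM.

```python
import numpy as np

def temporal_emphasis(time_to_latest, lam=0.5):
    """Hypothetical TEM weighting: scans nearer the latest acquisition get
    more emphasis. `lam` is an assumed decay hyperparameter."""
    return np.exp(-lam * np.asarray(time_to_latest, dtype=float))

# irregular intervals: scans acquired 3.0, 1.2, and 0.0 years before the latest
weights = temporal_emphasis([3.0, 1.2, 0.0])
```

Such weights could modulate the forget and input gates per time step, which is how irregular intervals enter the recurrence rather than being ignored.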

Citations: 0
Multi-Scale Attentional Network for Multi-Focal Segmentation of Active Bleed after Pelvic Fractures.
Pub Date : 2019-10-01 DOI: 10.1007/978-3-030-32692-0_53
Yuyin Zhou, David Dreizin, Yingwei Li, Zhishuai Zhang, Yan Wang, Alan Yuille

Trauma is the worldwide leading cause of death and disability in those younger than 45 years, and pelvic fractures are a major source of morbidity and mortality. Automated segmentation of multiple foci of arterial bleeding from abdominopelvic trauma CT could provide rapid, objective measurements of the total extent of active bleeding, potentially augmenting outcome prediction at the point of care while improving patient triage, allocation of appropriate resources, and time to definitive intervention. Despite the importance of active bleeding in the quick tempo of trauma care, the task remains quite challenging due to the variable contrast, intensity, location, size, shape, and multiplicity of bleeding foci. Existing work presents a heuristic rule-based segmentation technique that requires multiple stages and cannot be efficiently optimized end-to-end. To this end, we present the Multi-Scale Attentional Network (MSAN), the first reliable end-to-end network for automated segmentation of active hemorrhage from contrast-enhanced trauma CT scans. MSAN consists of the following components: 1) an encoder that fully integrates the global contextual information from holistic 2D slices; 2) a multi-scale strategy applied in both the training and inference stages to handle the challenges induced by variation of target sizes; 3) an attentional module that further refines the deep features, leading to better segmentation quality; and 4) a multi-view mechanism to leverage the 3D information. MSAN reports a significant improvement of more than 7% in DSC compared with prior art.
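The attentional module in component 3) can be sketched as a spatial reweighting of a feature map by a softmax-normalized score map. The shapes and the softmax-over-all-locations choice below are illustrative assumptions, not MSAN's exact module.

```python
import numpy as np

def spatial_attention(features, scores):
    """Refine a (C, H, W) feature map with a spatial attention map (sketch).
    `scores` is an (H, W) map of unnormalized attention logits."""
    e = np.exp(scores - scores.max())
    attn = e / e.sum()                  # softmax over all H*W locations
    return features * attn[None, :, :]  # broadcast the weights over channels
```

Locations with high scores (candidate bleeding foci) dominate the refined features, while background responses are suppressed.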

Citations: 15
Machine Learning in Medical Imaging: 10th International Workshop, MLMI 2019, Held in Conjunction with MICCAI 2019, Shenzhen, China, October 13, 2019, Proceedings
Pub Date : 2019-01-01 DOI: 10.1007/978-3-030-32692-0
Citations: 7
End-To-End Alzheimer's Disease Diagnosis and Biomarker Identification.
Pub Date : 2018-09-01 Epub Date: 2018-09-15 DOI: 10.1007/978-3-030-00919-9_39
Soheil Esmaeilzadeh, Dimitrios Ioannis Belivanis, Kilian M Pohl, Ehsan Adeli

As shown in computer vision, the power of deep learning lies in automatically learning relevant and powerful features for any prediction task, which is made possible through end-to-end architectures. However, deep learning approaches applied to classifying medical images do not adhere to this architecture, as they rely on several pre- and post-processing steps. This shortcoming can be explained by the relatively small number of available labeled subjects, the high dimensionality of neuroimaging data, and difficulties in interpreting the results of deep learning methods. In this paper, we propose a simple 3D Convolutional Neural Network and exploit its model parameters to tailor the end-to-end architecture for the diagnosis of Alzheimer's disease (AD). Our model can diagnose AD with an accuracy of 94.1% on the popular ADNI dataset using only MRI data, which outperforms the previous state-of-the-art. Based on the learned model, we identify the disease biomarkers, the results of which are in accordance with the literature. We further transfer the learned model to diagnose mild cognitive impairment (MCI), the prodromal stage of AD, which yields better results compared with other methods.
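The core operation of a 3D CNN on an MRI volume is a 3D convolution over all three spatial axes. The naive loop below shows the computation on a single volume and kernel; it is a teaching sketch (real implementations use optimized library kernels), not the paper's network.

```python
import numpy as np

def conv3d_valid(volume, kernel):
    """Naive 'valid'-mode 3D cross-correlation of one kernel over one volume."""
    d, h, w = kernel.shape
    D, H, W = volume.shape
    out = np.zeros((D - d + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                # inner product of the kernel with one 3D window of the volume
                out[i, j, k] = np.sum(volume[i:i+d, j:j+h, k:k+w] * kernel)
    return out
```

Stacking such layers with nonlinearities and pooling yields the kind of end-to-end 3D architecture the abstract proposes.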

Citations: 0
Developing Novel Weighted Correlation Kernels for Convolutional Neural Networks to Extract Hierarchical Functional Connectivities from fMRI for Disease Diagnosis.
Pub Date : 2018-09-01 Epub Date: 2018-09-15 DOI: 10.1007/978-3-030-00919-9_1
Biao Jie, Mingxia Liu, Chunfeng Lian, Feng Shi, Dinggang Shen

Functional magnetic resonance imaging (fMRI) has been widely applied to the analysis and diagnosis of brain diseases, including Alzheimer's disease (AD) and its prodrome, i.e., mild cognitive impairment (MCI). Traditional methods usually construct connectivity networks (CNs) by simply calculating Pearson correlation coefficients (PCCs) between time series of brain regions, and then extract low-level network measures as features to train the learning model. However, the valuable observation information in network construction (e.g., specific contributions of different time points) and high-level (i.e., high-order) network properties are neglected in these methods. In this paper, we first define a novel weighted correlation kernel (called wc-kernel) to measure the correlation of brain regions, by which weighting factors are determined in a data-driven manner to characterize the contribution of each time point, thus conveying richer interaction information of brain regions compared with the PCC method. Furthermore, we propose a wc-kernel based convolutional neural network (CNN) (called wck-CNN) framework for extracting the hierarchical (i.e., from low-order to high-order) functional connectivities for disease diagnosis, by using fMRI data. Specifically, we first define a layer to build dynamic CNs (DCNs) using the defined wc-kernels. Then, we define three layers to extract local (region specific), global (network specific) and temporal high-order properties from the constructed low-order functional connectivities as features for classification.
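The wc-kernel can be sketched as a weighted Pearson correlation in which each time point contributes according to a weight. In wck-CNN those weights are learned from data; the sketch below takes them as given, and with uniform weights it reduces to the ordinary PCC.

```python
import numpy as np

def wc_kernel(x, y, w):
    """Weighted Pearson correlation between two regional fMRI time series.
    `w`: per-time-point weights (learned in wck-CNN; fixed here)."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()                         # normalize weights
    mx, my = np.sum(w * x), np.sum(w * y)   # weighted means
    cov = np.sum(w * (x - mx) * (y - my))   # weighted covariance
    var_x = np.sum(w * (x - mx) ** 2)
    var_y = np.sum(w * (y - my) ** 2)
    return cov / np.sqrt(var_x * var_y)
```

Down-weighting noisy or motion-corrupted time points is one way such a kernel can convey richer interaction information than a plain PCC.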

Results on 174 subjects (a total of 563 scans) with rs-fMRI data from ADNI suggest that our method can not only improve performance compared with state-of-the-art methods, but also provide novel insights into the interaction patterns of brain activities and their changes in diseases.
Citations: 0
Deep Learning based Inter-Modality Image Registration Supervised by Intra-Modality Similarity.
Pub Date : 2018-09-01 Epub Date: 2018-09-15 DOI: 10.1007/978-3-030-00919-9_7
Xiaohuan Cao, Jianhua Yang, Li Wang, Zhong Xue, Qian Wang, Dinggang Shen

Non-rigid inter-modality registration can facilitate accurate information fusion from different modalities, but it is challenging due to the very different image appearances across modalities. In this paper, we propose to train a non-rigid inter-modality image registration network, which can directly predict the transformation field from the input multimodal images, such as CT and MR images. In particular, the training of our inter-modality registration network is supervised by an intra-modality similarity metric based on the available paired data, which is derived from a pre-aligned CT and MR dataset. Specifically, in the training stage, to register the input CT and MR images, their similarity is evaluated on the warped MR image and the MR image that is paired with the input CT. Thus, the intra-modality similarity metric can be directly applied to measure whether the input CT and MR images are well registered. Moreover, we adopt a dual-modality strategy, in which we measure the similarity in both the CT modality and the MR modality. In this way, the complementary anatomies in both modalities can be jointly considered to more accurately train the inter-modality registration network. In the testing stage, the trained inter-modality registration network can be directly applied to register new multimodal images without any paired data. Experimental results have shown that the proposed method achieves promising accuracy and efficiency on the challenging non-rigid inter-modality registration task and also outperforms the state-of-the-art approaches.
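The supervision signal described above can be sketched as an intra-modality similarity, evaluated on each side of the pre-aligned pair and summed over both modalities. Normalized cross-correlation is used here as a stand-in similarity metric; it is an assumption for illustration, not necessarily the paper's choice.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation: an intra-modality similarity metric."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float(np.mean(a * b))

def dual_modality_loss(warped_mr, paired_mr, warped_ct, paired_ct):
    """Training loss sketch: similarity is measured within each modality,
    comparing the warped image against the image paired with the fixed one,
    on both the MR and CT sides (negated so lower is better)."""
    return -(ncc(warped_mr, paired_mr) + ncc(warped_ct, paired_ct))
```

Because both comparisons are same-modality, the loss avoids the ill-posed problem of scoring similarity directly between a CT and an MR image.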

Citations: 75
Early Diagnosis of Autism Disease by Multi-channel CNNs. 多通道细胞神经网络对自闭症的早期诊断。
Pub Date : 2018-09-01 Epub Date: 2018-09-15 DOI: 10.1007/978-3-030-00919-9_35
Guannan Li, Mingxia Liu, Quansen Sun, Dinggang Shen, Li Wang

Currently there are still no early biomarkers to detect infants at risk of autism spectrum disorder (ASD), which is mainly diagnosed based on behavioral observations at three or four years of age. Since intervention efforts may miss a critical developmental window after 2 years old, it is important to identify imaging-based biomarkers for early diagnosis of ASD. Although some methods using magnetic resonance imaging (MRI) for brain disease prediction have been proposed in the last decade, few of them were developed for predicting ASD at an early age. Inspired by deep multi-instance learning, in this paper we propose a patch-level data-expanding strategy for multi-channel convolutional neural networks to automatically identify infants at risk of ASD at an early age. Experiments were conducted on the National Database for Autism Research (NDAR), with results showing that our proposed method can significantly improve the performance of early diagnosis of ASD.
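The patch-level data-expanding idea can be sketched as follows: each labeled scan is cut into several fixed-size patches, each serving as a training instance for one input channel. The volume size, patch centers, and function name below are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

def expand_to_patches(volume, centers, patch_size=8):
    """Cut fixed-size cubic patches from one 3-D image so that a single
    labeled scan yields many training instances, in the spirit of
    multi-instance learning. Centers must lie at least patch_size // 2
    voxels away from every border."""
    r = patch_size // 2
    patches = [volume[x - r:x + r, y - r:y + r, z - r:z + r]
               for (x, y, z) in centers]
    return np.stack(patches)  # shape: (n_patches, p, p, p)

vol = np.arange(16 ** 3, dtype=np.float32).reshape(16, 16, 16)
patches = expand_to_patches(vol, centers=[(8, 8, 8), (5, 9, 6), (10, 6, 9)])
print(patches.shape)  # (3, 8, 8, 8): three instances from one scan
```

Feeding each patch to its own network channel is what makes the strategy "multi-channel": the per-patch features are learned jointly but from different local views of the same infant brain.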

Currently there are still no early biomarkers to detect infants at risk of autism spectrum disorder (ASD), which is mainly diagnosed based on behavioral observations at three or four years of age. Since intervention efforts may miss a critical developmental window after 2 years old, it is important to identify imaging-based biomarkers for early diagnosis of ASD. Although some methods using magnetic resonance imaging (MRI) for brain disease prediction have been proposed in the last decade, few of them were developed for predicting ASD at an early age. Inspired by deep multi-instance learning, this paper proposes a patch-level data-expanding strategy for multi-channel convolutional neural networks to automatically identify infants at risk of ASD at an early age. Experiments were conducted on the National Database for Autism Research (NDAR), and the results show that the proposed method can significantly improve the performance of early ASD diagnosis.
{"title":"Early Diagnosis of Autism Disease by Multi-channel CNNs.","authors":"Guannan Li, Mingxia Liu, Quansen Sun, Dinggang Shen, Li Wang","doi":"10.1007/978-3-030-00919-9_35","DOIUrl":"10.1007/978-3-030-00919-9_35","url":null,"abstract":"<p><p>Currently there are still no early biomarkers to detect infants with risk of autism spectrum disorder (ASD), which is mainly diagnosed based on behavior observations at three or four years old. Since intervention efforts may miss a critical developmental window after 2 years old, it is significant to identify imaging-based biomarkers for early diagnosis of ASD. Although some methods using magnetic resonance imaging (MRI) for brain disease prediction have been proposed in the last decade, few of them were developed for predicting ASD in early age. Inspired by deep multi-instance learning, in this paper, we propose a patch-level data-expanding strategy for multi-channel convolutional neural networks to automatically identify infants with risk of ASD in early age. Experiments were conducted on the National Database for Autism Research (NDAR), with results showing that our proposed method can significantly improve the performance of early diagnosis of ASD.</p>","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":"11046 ","pages":"303-309"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6235442/pdf/nihms-994933.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"36743556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Automatic Accurate Infant Cerebellar Tissue Segmentation with Densely Connected Convolutional Network. 利用密集连接卷积网络实现婴儿小脑组织的自动精确分割。
Pub Date : 2018-09-01 Epub Date: 2018-09-15 DOI: 10.1007/978-3-030-00919-9_27
Jiawei Chen, Han Zhang, Dong Nie, Li Wang, Gang Li, Weili Lin, Dinggang Shen

The human cerebellum has been recognized as a key brain structure for motor control and cognitive function regulation. Investigation of brain functional development in early life has recently focused on both cerebral and cerebellar development. Accurate segmentation of the infant cerebellum into different tissues is among the most important steps for quantitative developmental studies. However, this is extremely challenging due to the weak tissue contrast, extremely folded structures, and severe partial volume effect. To date, very few works have addressed infant cerebellum segmentation. We tackle this challenge by proposing a densely connected convolutional network that learns robust feature representations of different cerebellar tissues for automatic and accurate segmentation. Specifically, we develop a novel deep neural network architecture that directly connects all the layers to ensure maximum information flow, even among distant layers in the network. This is distinct from all previous studies. Importantly, the outputs of all previous layers are passed to all subsequent layers as contextual features that can guide the segmentation. Our method outperformed other state-of-the-art methods when applied to Baby Connectome Project (BCP) data consisting of both 6- and 12-month-old infant brain images.
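The dense connectivity pattern described above — every layer receiving the concatenated outputs of all preceding layers — can be sketched in a few lines. This toy uses 1-D feature vectors and plain matrix multiplies in place of the paper's 3-D convolutions, so all shapes and names are illustrative:

```python
import numpy as np

def dense_block(x, weights):
    """Minimal sketch of dense connectivity: each layer receives the
    concatenation of the block input and every previous layer's output,
    and its own output is forwarded to all later layers."""
    features = [x]
    for W in weights:
        inp = np.concatenate(features, axis=1)     # all preceding feature maps
        features.append(np.maximum(0.0, inp @ W))  # ReLU "layer"
    return np.concatenate(features, axis=1)

# Three layers, each adding growth_rate new channels on top of all inputs.
rng = np.random.default_rng(0)
c0, growth_rate = 8, 4
weights, c = [], c0
for _ in range(3):
    weights.append(0.1 * rng.standard_normal((c, growth_rate)))
    c += growth_rate
x = rng.standard_normal((2, c0))
y = dense_block(x, weights)
print(y.shape)  # (2, 20): channels grow from 8 to 8 + 3 * 4
```

Because the block input is part of every concatenation, the final features keep the original channels verbatim alongside the learned ones, which is how earlier-layer context reaches the segmentation output directly.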

The human cerebellum has been recognized as a key brain structure for motor control and cognitive function regulation. Investigation of brain functional development in early life has recently focused on both cerebral and cerebellar development. Accurately segmenting the infant cerebellum into different tissues is among the most important steps for quantitative developmental studies, yet it is extremely challenging due to the weak tissue contrast, extremely folded structures, and severe partial volume effect. To date, very few works have addressed infant cerebellum segmentation. We tackle this challenge by proposing a densely connected convolutional network that learns robust feature representations of different cerebellar tissues for automatic and accurate segmentation. Specifically, we develop a novel deep neural network architecture that directly connects all the layers to ensure maximum information flow even among distant layers, which distinguishes it from all previous studies. Importantly, the outputs of all previous layers are passed to all subsequent layers as contextual features that guide the segmentation. When applied to Baby Connectome Project (BCP) data consisting of 6- and 12-month-old infant brain images, our method outperformed other state-of-the-art methods.
{"title":"Automatic Accurate Infant Cerebellar Tissue Segmentation with Densely Connected Convolutional Network.","authors":"Jiawei Chen, Han Zhang, Dong Nie, Li Wang, Gang Li, Weili Lin, Dinggang Shen","doi":"10.1007/978-3-030-00919-9_27","DOIUrl":"10.1007/978-3-030-00919-9_27","url":null,"abstract":"<p><p>The human cerebellum has been recognized as a key brain structure for motor control and cognitive function regulation. Investigation of brain functional development in the early life has recently been focusing on both cerebral and cerebellar development. Accurate segmentation of the infant cerebellum into different tissues is among the most important steps for quantitative development studies. However, this is extremely challenging due to the weak tissue contrast, extremely folded structures, and severe partial volume effect. To date, there are very few works touching infant cerebellum segmentation. We tackle this challenge by proposing a densely connected convolutional network to learn robust feature representations of different cerebellar tissues towards automatic and accurate segmentation. Specifically, we develop a novel deep neural network architecture by directly connecting all the layers to ensure maximum information flow even among distant layers in the network. This is distinct from all previous studies. Importantly, the outputs from all previous layers are passed to all subsequent layers as contextual features that can guide the segmentation. Our method achieved superior performance than other state-of-the-art methods when applied to Baby Connectome Project (BCP) data consisting of both 6- and 12-month-old infant brain images.</p>","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":"11046 ","pages":"233-240"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/978-3-030-00919-9_27","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"36624677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3