
Machine learning in medical imaging. MLMI (Workshop): Latest Publications

Deep Learning based Inter-Modality Image Registration Supervised by Intra-Modality Similarity.
Pub Date : 2018-09-01 Epub Date: 2018-09-15 DOI: 10.1007/978-3-030-00919-9_7
Xiaohuan Cao, Jianhua Yang, Li Wang, Zhong Xue, Qian Wang, Dinggang Shen

Non-rigid inter-modality registration can facilitate accurate information fusion across modalities, but it is challenging due to the very different image appearances of different modalities. In this paper, we propose to train a non-rigid inter-modality image registration network that directly predicts the transformation field from the input multimodal images, such as CT and MR images. In particular, the training of our inter-modality registration network is supervised by an intra-modality similarity metric based on available paired data, derived from a pre-aligned CT and MR dataset. Specifically, in the training stage, to register the input CT and MR images, their similarity is evaluated between the warped MR image and the MR image paired with the input CT. In this way, the intra-modality similarity metric can be applied directly to measure whether the input CT and MR images are well registered. Moreover, we adopt a dual-modality strategy, measuring the similarity on both the CT and MR modalities, so that the complementary anatomy in both modalities can be considered jointly to train the inter-modality registration network more accurately. In the testing stage, the trained inter-modality registration network can be applied directly to register new multimodal images without any paired data. Experimental results show that the proposed method achieves promising accuracy and efficiency on the challenging non-rigid inter-modality registration task and outperforms state-of-the-art approaches.
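To make the supervision signal concrete, here is a minimal NumPy sketch, not the authors' implementation: it uses normalized cross-correlation as a stand-in intra-modality similarity metric, evaluated on the warped MR against the MR paired with the input CT, and averaged over both modalities in the dual-modality fashion. All function names are hypothetical.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation: a common intra-modality similarity metric."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def dual_modality_similarity(warped_mr, paired_mr, warped_ct, paired_ct):
    """Average the intra-modality similarities measured on the MR and CT sides."""
    return 0.5 * (ncc(warped_mr, paired_mr) + ncc(warped_ct, paired_ct))

rng = np.random.default_rng(0)
mr, ct = rng.random((8, 8, 8)), rng.random((8, 8, 8))
# A perfectly registered pair (identical images) scores close to 1 on both sides.
score = dual_modality_similarity(mr, mr, ct, ct)
```

In the actual method this similarity would be the training loss driving a network that predicts the transformation field; the sketch only shows how paired data lets an intra-modality metric supervise an inter-modality task.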

Vol. 11046, pp. 55-63.
Citations: 75
Automatic Accurate Infant Cerebellar Tissue Segmentation with Densely Connected Convolutional Network.
Pub Date : 2018-09-01 Epub Date: 2018-09-15 DOI: 10.1007/978-3-030-00919-9_27
Jiawei Chen, Han Zhang, Dong Nie, Li Wang, Gang Li, Weili Lin, Dinggang Shen

The human cerebellum has been recognized as a key brain structure for motor control and the regulation of cognitive function. Investigations of early brain functional development have recently focused on both cerebral and cerebellar development. Accurate segmentation of the infant cerebellum into different tissues is among the most important steps for quantitative developmental studies. However, this is extremely challenging due to weak tissue contrast, extremely folded structures, and severe partial volume effects. To date, very few works have addressed infant cerebellum segmentation. We tackle this challenge by proposing a densely connected convolutional network that learns robust feature representations of the different cerebellar tissues for automatic and accurate segmentation. Specifically, we develop a novel deep neural network architecture that directly connects all the layers to ensure maximum information flow, even among distant layers in the network; this is distinct from all previous studies. Importantly, the outputs of all preceding layers are passed to all subsequent layers as contextual features that can guide the segmentation. Our method achieved superior performance over other state-of-the-art methods when applied to Baby Connectome Project (BCP) data consisting of both 6- and 12-month-old infant brain images.
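The dense-connectivity pattern can be sketched in a few lines. The toy below mimics it in NumPy with 1x1 channel-mixing layers standing in for the 3D convolutions a real network would use; the layer count and growth rate are arbitrary assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w):
    # Toy 1x1 "convolution": a per-pixel channel-mixing matmul followed by ReLU.
    return np.maximum(x @ w, 0.0)

def dense_block(x, num_layers=3, growth=4):
    # Dense connectivity: every layer receives the channel-wise concatenation of
    # the block input and ALL preceding layer outputs, so even distant layers
    # exchange information directly.
    features = [x]
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=-1)
        w = 0.1 * rng.standard_normal((inp.shape[-1], growth))
        features.append(layer(inp, w))
    return np.concatenate(features, axis=-1)

out = dense_block(rng.random((8, 8, 2)))
# Output channels: 2 input channels + 3 layers x growth 4 = 14.
```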

Vol. 11046, pp. 233-240.
Citations: 3
Deep Learning for Fast and Spatially-Constrained Tissue Quantification from Highly-Undersampled Data in Magnetic Resonance Fingerprinting (MRF).
Pub Date : 2018-09-01 Epub Date: 2018-09-15 DOI: 10.1007/978-3-030-00919-9_46
Zhenghan Fang, Yong Chen, Mingxia Liu, Yiqiang Zhan, Weili Lin, Dinggang Shen

Magnetic resonance fingerprinting (MRF) is a novel quantitative imaging technique that allows simultaneous measurement of multiple important tissue properties of the human body, e.g., T1 and T2 relaxation times. While MRF has demonstrated better scan efficiency than conventional quantitative imaging techniques, further acceleration is desired, especially for certain subjects such as infants and young children. However, the conventional MRF framework uses only a simple template matching algorithm to quantify tissue properties, without considering the underlying spatial association among pixels in MRF signals. In this work, we aim to accelerate MRF acquisition by developing a new post-processing method that allows accurate quantification of tissue properties from fewer sampled data. Moreover, to improve quantification accuracy, the MRF signals from multiple surrounding pixels are used together to better estimate the tissue properties at the central target pixel, whereas the original template matching method uses only the signal from the target pixel itself. In particular, a deep learning model, U-Net, is used to learn the mapping from the MRF signal evolutions to the tissue property map. To further reduce the network size of the U-Net, principal component analysis (PCA) is used to reduce the dimensionality of the input signals. On in vivo brain data, our method achieves accurate quantification of both T1 and T2 using only 25% of the time points, a four-fold acceleration in data acquisition compared to the original template matching method.
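The PCA step that shrinks the network input can be illustrated directly. The sketch below (synthetic data, hypothetical dimensions) compresses long signal evolutions with SVD-based PCA; when the signals truly lie in a low-dimensional subspace, the compact codes preserve them with essentially no loss.

```python
import numpy as np

def pca_compress(signals, k):
    # signals: (n_pixels, n_timepoints) MRF signal evolutions.
    mean = signals.mean(axis=0)
    centered = signals - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:k]                    # top-k principal directions
    return centered @ components.T, components, mean

rng = np.random.default_rng(0)
# Synthetic evolutions that genuinely live in a 5-D subspace of 200 time points.
signals = rng.standard_normal((1000, 5)) @ rng.standard_normal((5, 200))
coded, components, mean = pca_compress(signals, k=5)
recon = coded @ components + mean
# coded is 40x shorter per pixel, yet rank-5 data is recovered almost exactly.
```

In the paper's pipeline the compact codes, not the raw evolutions, would feed the U-Net, which is how PCA reduces the network size.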

Vol. 11046, pp. 398-405.
Citations: 5
Machine Learning in Medical Imaging: 9th International Workshop, MLMI 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, September 16, 2018, Proceedings
Pub Date : 2018-01-01 DOI: 10.1007/978-3-030-00919-9
Yinghuan Shi, Heung-Il Suk, Mingxia Liu
Citations: 3
Machine Learning for Large-Scale Quality Control of 3D Shape Models in Neuroimaging.
Pub Date : 2017-09-01 Epub Date: 2017-09-07 DOI: 10.1007/978-3-319-67389-9_43
Dmitry Petrov, Boris A Gutman, Shih-Hua Julie Yu, Theo G M van Erp, Jessica A Turner, Lianne Schmaal, Dick Veltman, Lei Wang, Kathryn Alpert, Dmitry Isaev, Artemis Zavaliangos-Petropulu, Christopher R K Ching, Vince Calhoun, David Glahn, Theodore D Satterthwaite, Ole Andreas Andreasen, Stefan Borgwardt, Fleur Howells, Nynke Groenewold, Aristotle Voineskos, Joaquim Radua, Steven G Potkin, Benedicto Crespo-Facorro, Diana Tordesillas-Gutiérrez, Li Shen, Irina Lebedeva, Gianfranco Spalletta, Gary Donohoe, Peter Kochunov, Pedro G P Rosa, Anthony James, Udo Dannlowski, Bernhard T Baune, André Aleman, Ian H Gotlib, Henrik Walter, Martin Walter, Jair C Soares, Stefan Ehrlich, Ruben C Gur, N Trung Doan, Ingrid Agartz, Lars T Westlye, Fabienne Harrisberger, Anita Riecher-Rössler, Anne Uhlmann, Dan J Stein, Erin W Dickie, Edith Pomarol-Clotet, Paola Fuentes-Claramonte, Erick Jorge Canales-Rodríguez, Raymond Salvador, Alexander J Huang, Roberto Roiz-Santiañez, Shan Cong, Alexander Tomyshev, Fabrizio Piras, Daniela Vecchio, Nerisa Banaj, Valentina Ciullo, Elliot Hong, Geraldo Busatto, Marcus V Zanetti, Mauricio H Serpa, Simon Cervenka, Sinead Kelly, Dominik Grotegerd, Matthew D Sacchet, Ilya M Veer, Meng Li, Mon-Ju Wu, Benson Irungu, Esther Walton, Paul M Thompson

As very large studies of complex neuroimaging phenotypes become more common, human quality assessment of MRI-derived data remains one of the last major bottlenecks. Few attempts have so far been made to address this issue with machine learning. In this work, we optimize predictive models of quality for meshes representing deep brain structure shapes. We use standard vertex-wise and global shape features computed homologously across 19 cohorts and over 7500 human-rated subjects, training kernelized Support Vector Machine and Gradient Boosted Decision Trees classifiers to detect meshes of failing quality. Our models generalize across datasets and diseases, reducing human workload by 30-70%, or equivalently hundreds of human rater hours for datasets of comparable size, with recall rates approaching inter-rater reliability.
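A minimal version of this quality-control setup can be sketched with scikit-learn's gradient boosting (one of the two classifier families the paper trains). The features below are synthetic stand-ins for the vertex-wise and global shape features, with "failing" meshes drawn from a shifted distribution; none of this is the authors' data or code.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical stand-ins for mesh shape features.
n, d = 600, 20
X = np.vstack([rng.normal(0.0, 1.0, (n, d)),    # passing meshes
               rng.normal(1.0, 1.0, (n, d))])   # failing meshes
y = np.r_[np.zeros(n), np.ones(n)]

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
recall = pred[y_te == 1].mean()   # fraction of failing meshes caught
```

Recall on the failing class is the operative metric here: every failing mesh the model catches is one a human rater no longer needs to inspect.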

Vol. 10541, pp. 371-378. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6049825/pdf/nihms980690.pdf
Citations: 0
Inter-subject Similarity Guided Brain Network Modeling for MCI Diagnosis.
Pub Date : 2017-09-01 Epub Date: 2017-09-07 DOI: 10.1007/978-3-319-67389-9_20
Yu Zhang, Han Zhang, Xiaobo Chen, Mingxia Liu, Xiaofeng Zhu, Dinggang Shen

Sparse representation-based brain network modeling, although popular, often results in relatively large inter-subject variability in network structures. This inevitably makes inter-subject comparison difficult and eventually deteriorates the generalization capability of personalized disease diagnosis. Accordingly, group sparse representation has been proposed to alleviate this limitation by jointly estimating connectivity weights for all subjects. However, the brain networks constructed by this method often fail to provide satisfactory separability between subjects from different groups (e.g., patients vs. normal controls), which also affects the performance of computer-aided disease diagnosis. Based on the hypothesis that subjects from the same group should have greater similarity in their functional connectivity (FC) patterns than subjects from other groups, we propose an "inter-subject FC similarity-guided" group sparse network modeling method. In this method, we explicitly include the inter-subject FC similarity as a constraint for group-wise FC network modeling, while retaining sufficient between-group differences in the resulting FC networks. This improves the separability of brain functional networks between different groups, facilitating better personalized brain disease diagnosis. Specifically, the inter-subject FC similarity is roughly estimated by comparing the Pearson's correlation-based FC patterns of each brain region with other regions for each pair of subjects. This similarity is then implemented as an additional weighting term to ensure adequate inter-subject FC differences between subjects from different groups. Of note, our method retains the group sparsity constraint to ensure the overall consistency of the resulting individual brain networks. Experimental results show that our method achieves a balanced trade-off, not only generating individually consistent FC networks but also effectively maintaining the necessary group differences, thereby significantly improving connectomics-based diagnosis of mild cognitive impairment (MCI).
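The rough inter-subject FC similarity estimate described above can be sketched as follows: build each subject's Pearson FC matrix, then correlate the two subjects' connectivity profiles region by region and average. This is an illustrative reading of the described estimate, not the paper's exact formulation.

```python
import numpy as np

def fc_matrix(ts):
    # ts: (n_timepoints, n_regions) rs-fMRI time series -> Pearson FC matrix.
    return np.corrcoef(ts.T)

def inter_subject_fc_similarity(fc_a, fc_b):
    # Correlate each region's connectivity profile between the two subjects,
    # then average over regions: a rough pairwise FC-pattern similarity.
    per_region = [np.corrcoef(fc_a[r], fc_b[r])[0, 1]
                  for r in range(fc_a.shape[0])]
    return float(np.mean(per_region))

rng = np.random.default_rng(0)
fc1 = fc_matrix(rng.standard_normal((120, 10)))
fc2 = fc_matrix(rng.standard_normal((120, 10)))
# Identical FC patterns give similarity 1; distinct subjects score lower.
same = inter_subject_fc_similarity(fc1, fc1)
diff = inter_subject_fc_similarity(fc1, fc2)
```

In the full method such pairwise similarities would enter the group sparse objective as weights, pulling same-group subjects' networks together while preserving between-group differences.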

Vol. 10541, pp. 168-175.
Citations: 4
Structural Connectivity Guided Sparse Effective Connectivity for MCI Identification.
Pub Date : 2017-09-01 Epub Date: 2017-09-07 DOI: 10.1007/978-3-319-67389-9_35
Yang Li, Jingyu Liu, Meilin Luo, Ke Li, Pew-Thian Yap, Minjeong Kim, Chong-Yaw Wee, Dinggang Shen

Recent advances in network modeling techniques have made it possible to study neurological disorders at the whole-brain level, based on functional connectivity inferred from resting-state functional magnetic resonance imaging (rs-fMRI) scans. However, constructing directed effective connectivity, which provides a more comprehensive characterization of the functional interactions among brain regions, remains a challenging task, particularly when the ultimate goal is to identify disease-associated anomalies in brain functional interactions. In this paper, we propose a novel method for inferring effective connectivity from multimodal neuroimaging data for brain disease classification. Specifically, we apply a newly devised weighted sparse regression model to rs-fMRI data to determine the network structure of effective connectivity, with guidance from diffusion tensor imaging (DTI) data. We further employ a regression algorithm to estimate the effective connectivity strengths based on the previously identified network structure. Finally, we utilize a bagging classifier to evaluate the performance of the proposed sparse effective connectivity network by distinguishing mild cognitive impairment from healthy aging.
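One plausible reading of "weighted sparse regression with DTI guidance" is a lasso whose l1 penalty is weighted per connection, with strong structural (DTI) connections penalized less. The sketch below illustrates that idea via the standard column-rescaling equivalence; the weights, data, and penalty form are illustrative assumptions, not the paper's model.

```python
import numpy as np
from sklearn.linear_model import Lasso

def weighted_sparse_regression(X, y, weights, alpha=0.1):
    # Weighted-l1 penalty via column rescaling: penalizing w_j * |b_j| is
    # equivalent to an ordinary lasso on X_j / w_j, then mapping back
    # b_j = b'_j / w_j.  Small weights (strong DTI links) mean weak penalties.
    Xs = X / weights
    fit = Lasso(alpha=alpha, fit_intercept=False).fit(Xs, y)
    return fit.coef_ / weights

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))         # time series of 8 candidate regions
beta = np.zeros(8)
beta[0] = 1.5                             # only region 0 truly drives the target
y = X @ beta + 0.01 * rng.standard_normal(200)
w = np.ones(8)
w[0] = 0.2                                # DTI says region 0 is strongly connected
coef = weighted_sparse_regression(X, y, w)
# The weakly penalized true connection survives; spurious ones are zeroed out.
```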

Vol. 10541, pp. 299-306.
Citations: 2
Gradient Boosted Trees for Corrective Learning.
Pub Date : 2017-09-01 Epub Date: 2017-09-07 DOI: 10.1007/978-3-319-67389-9_24
Baris U Oguz, Russell T Shinohara, Paul A Yushkevich, Ipek Oguz

Random forests (RF) have long been a widely popular method in medical image analysis. Meanwhile, the closely related gradient boosted trees (GBT) have not become a mainstream tool in medical imaging despite their attractive performance, perhaps due to their computational cost. In this paper, we leverage the recent availability of an efficient open-source GBT implementation to illustrate the GBT method in a corrective learning framework, applied to the segmentation of the caudate nucleus, putamen, and hippocampus. The size and shape of these structures are used to derive important biomarkers in many neurological and psychiatric conditions. However, the large variability in deep gray matter appearance makes their automated segmentation from MRI scans a challenging task. We propose using GBT to improve existing segmentation methods. We begin with an existing 'host' segmentation method to create an estimated surface. Based on this estimate, a surface-based sampling scheme is used to construct a set of candidate locations. GBT models are trained on features derived from the candidate locations, including spatial coordinates, image intensity, texture, and gradient magnitude. The classification probabilities from the GBT models are used to calculate a final surface estimate. The method is evaluated on a public dataset with 2-fold cross-validation. We use a multi-atlas approach and FreeSurfer as host segmentation methods. The mean reduction in the surface distance error metric was 0.2-0.3 mm for FreeSurfer and 0.1 mm for multi-atlas segmentation, for each of the caudate, putamen, and hippocampus. Importantly, our approach outperformed an RF model trained on the same features (p < 0.05 on all measures). Our method is readily generalizable, can be applied to a wide range of medical image segmentation problems, and allows any segmentation method to be used as input.
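The corrective-learning loop can be illustrated on a 1-D toy problem: a host segmentation mis-places a boundary, candidate locations are sampled around it, and a GBT learns from coordinate, intensity, and gradient features to relabel them. The toy image model and every constant below are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 2000
# Candidate locations sampled around a host segmentation's boundary estimate
# at x = 0.5, while the true boundary actually sits at x = 0.6.
coords = 0.5 + 0.4 * (rng.random(n) - 0.5)
labels = (coords < 0.6).astype(int)                  # 1 = inside the true structure
intensity = labels + 0.05 * rng.standard_normal(n)   # bright inside, dark outside
grad_mag = np.exp(-(coords - 0.6) ** 2 / 0.01)       # gradient peaks at the boundary

X = np.column_stack([coords, intensity, grad_mag])   # coordinate + image features
clf = GradientBoostingClassifier(random_state=0).fit(X[:1500], labels[:1500])
acc = (clf.predict(X[1500:]) == labels[1500:]).mean()
```

In the full method the per-candidate probabilities, rather than hard labels, would be combined into the corrected surface estimate.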

Gradient Boosted Trees for Corrective Learning. Machine learning in medical imaging. MLMI (Workshop), vol. 10541, pp. 203-211 (2017). DOI: 10.1007/978-3-319-67389-9_24.
Citations: 4
Identifying Autism from Resting-State fMRI Using Long Short-Term Memory Networks.
Pub Date : 2017-09-01 Epub Date: 2017-09-07 DOI: 10.1007/978-3-319-67389-9_42
Nicha C Dvornek, Pamela Ventola, Kevin A Pelphrey, James S Duncan

Functional magnetic resonance imaging (fMRI) has helped characterize the pathophysiology of autism spectrum disorders (ASD) and carries promise for producing objective biomarkers for ASD. Recent work has focused on deriving ASD biomarkers from resting-state functional connectivity measures. However, current efforts that have identified ASD with high accuracy were limited to homogeneous, small datasets, while classification results for heterogeneous, multi-site data have shown much lower accuracy. In this paper, we propose the use of recurrent neural networks with long short-term memory (LSTMs) to classify individuals with ASD and typical controls directly from the resting-state fMRI time series. We used the entire large, multi-site Autism Brain Imaging Data Exchange (ABIDE) I dataset for training and testing the LSTM models. Under a cross-validation framework, we achieved classification accuracy of 68.5%, which is 9% higher than previously reported methods that used fMRI data from the whole ABIDE cohort. Finally, we present an interpretation of the trained LSTM weights, highlighting potential functional networks and regions known to be implicated in ASD.
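To make the model family concrete, here is a shape-level NumPy sketch of a single-layer LSTM run over one subject's ROI time series, followed by a mean-pool and logistic readout. The dimensions (90 time points, 200 ROIs, 32 hidden units), the random weights, and the pooling/readout are illustrative assumptions, not the authors' architecture or trained model.

```python
# Forward pass of one LSTM layer over an rs-fMRI-like sequence, NumPy only.
# Weights are random and untrained -- this only illustrates the computation.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_forward(x, Wx, Wh, b):
    """x: (T, D) sequence. Gate weights are packed as 4H rows in the order
    [input, forget, cell, output]. Returns all hidden states, shape (T, H)."""
    T, _ = x.shape
    H = Wh.shape[1]
    h, c = np.zeros(H), np.zeros(H)
    hs = np.zeros((T, H))
    for t in range(T):
        z = Wx @ x[t] + Wh @ h + b          # all four gates at once, (4H,)
        i = sigmoid(z[0 * H:1 * H])         # input gate
        f = sigmoid(z[1 * H:2 * H])         # forget gate
        g = np.tanh(z[2 * H:3 * H])         # candidate cell state
        o = sigmoid(z[3 * H:4 * H])         # output gate
        c = f * c + i * g
        h = o * np.tanh(c)
        hs[t] = h
    return hs

rng = np.random.default_rng(0)
T, D, H = 90, 200, 32                       # time points, ROIs, hidden units
x = rng.standard_normal((T, D))             # one subject's ROI time series
Wx = rng.standard_normal((4 * H, D)) * 0.05
Wh = rng.standard_normal((4 * H, H)) * 0.05
b = np.zeros(4 * H)

hs = lstm_forward(x, Wx, Wh, b)
w_out = rng.standard_normal(H) * 0.1
p_asd = sigmoid(w_out @ hs.mean(axis=0))    # pooled hidden states -> probability
```

In practice one would train such a model end-to-end with a deep learning framework; the point here is only the data flow from time series to a per-subject probability.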

Machine learning in medical imaging. MLMI (Workshop), vol. 10541, pp. 362-370 (2017). DOI: 10.1007/978-3-319-67389-9_42.
Citations: 151
Feature Learning and Fusion of Multimodality Neuroimaging and Genetic Data for Multi-status Dementia Diagnosis.
Pub Date : 2017-09-01 Epub Date: 2017-09-07 DOI: 10.1007/978-3-319-67389-9_16
Tao Zhou, Kim-Han Thung, Xiaofeng Zhu, Dinggang Shen

In this paper, we aim to maximally utilize multimodality neuroimaging and genetic data to predict Alzheimer's disease (AD) and its prodromal status, i.e., a multi-status dementia diagnosis problem. Multimodality neuroimaging data such as MRI and PET provide valuable insights into abnormalities, and genetic data such as single nucleotide polymorphisms (SNPs) provide information about a patient's AD risk factors. When used in conjunction, AD diagnosis may be improved. However, these data are heterogeneous (e.g., having different data distributions) and differ in sample count (e.g., far fewer subjects have PET data than MRI or SNP data). Thus, learning an effective model from these data is challenging. To this end, we present a novel three-stage deep feature learning and fusion framework, in which the deep neural network is trained stage-wise. Each stage of the network learns feature representations for a different combination of modalities, via effective training using the maximum number of available samples. Specifically, in the first stage, we learn latent representations (i.e., high-level features) for each modality independently, so that the heterogeneity between modalities can be better addressed before they are combined in the next stage. In the second stage, we learn joint latent features for each pair of modalities by using the high-level features learned in the first stage. In the third stage, we learn the diagnostic labels by fusing the joint latent features learned in the second stage. We have tested our framework on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset for multi-status AD diagnosis, and the experimental results show that the proposed framework outperforms other methods.
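The staging described above can be sketched at the level of shapes and sample counts. In the sketch below the trainable encoders are replaced by fixed random linear maps with tanh nonlinearities so the three stages and the unequal per-modality sample counts are visible; all dimensions, sample counts, and the diagnostic classes are made-up assumptions, and the real framework trains deep networks at each stage.

```python
# Schematic of a three-stage multimodal feature learning/fusion pipeline.
# Random untrained "encoders" stand in for the stage-wise trained networks.
import numpy as np

rng = np.random.default_rng(0)

# Stage 0: unequal sample counts per modality (PET is the scarce one).
n_mri, n_pet, n_snp = 800, 300, 700
n_paired = 250                      # subjects with all modalities available
mri = rng.standard_normal((n_mri, 90))      # e.g. 90 ROI volumes
pet = rng.standard_normal((n_pet, 90))
snp = rng.standard_normal((n_snp, 3000))    # e.g. 3000 SNPs

def encoder(dim_in, dim_out):
    """A fixed random linear map + tanh, standing in for a trained network."""
    W = rng.standard_normal((dim_in, dim_out)) / np.sqrt(dim_in)
    return lambda x: np.tanh(x @ W)

# Stage 1: one encoder per modality, each using ALL samples available for
# that modality (the maximum usable number per modality).
enc_mri, enc_pet, enc_snp = encoder(90, 32), encoder(90, 32), encoder(3000, 32)
h_mri, h_pet, h_snp = enc_mri(mri), enc_pet(pet), enc_snp(snp)

# Stage 2: a joint encoder per modality pair, applied where the pair
# co-occurs (sketched here with the first n_paired rows).
joint_mp, joint_ms, joint_ps = encoder(64, 16), encoder(64, 16), encoder(64, 16)
h_mri_pet = joint_mp(np.concatenate([h_mri[:n_paired], h_pet[:n_paired]], axis=1))
h_mri_snp = joint_ms(np.concatenate([h_mri[:n_paired], h_snp[:n_paired]], axis=1))
h_pet_snp = joint_ps(np.concatenate([h_pet[:n_paired], h_snp[:n_paired]], axis=1))

# Stage 3: fuse pairwise joint features and map to multi-status diagnosis
# logits (e.g. three classes such as NC / MCI / AD).
fused = np.concatenate([h_mri_pet, h_mri_snp, h_pet_snp], axis=1)
readout = rng.standard_normal((48, 3)) / np.sqrt(48)
logits = fused @ readout
```

The design point the staging captures is that early stages never discard subjects who are missing a modality: only the final fusion is restricted to fully paired data.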

Machine learning in medical imaging. MLMI (Workshop), vol. 10541, pp. 132-140 (2017). DOI: 10.1007/978-3-319-67389-9_16.
Citations: 24