{"title":"Multimodal deformable registration based on unsupervised learning","authors":"T. Ma, Z. Li, R. Liu, X. Fan, Z. Luo","doi":"10.13700/J.BH.1001-5965.2020.0449","DOIUrl":null,"url":null,"abstract":"Multimodal deformable registration is designed to solve dense spatial transformations and is used to align images of two different modalities It is a key issue in many medical image analysis applications Multimodal image registration based on traditional methods aims to solve the optimization problem of each pair of images, and usually achieves excellent registration performance, but the calculation cost is high and the running time is long The deep learning method greatly reduces the running time by learning the network used to perform registration These learning-based methods are very effective for single-modality registration However, the intensity distribution of different modal images is unknown and complex Most existing methods rely heavily on label data Faced with these challenges, this paper proposes a deep multimodal registration framework based on unsupervised learning Specifically, the framework consists of feature learning based on matching amount and deformation field learning based on maximum posterior probability, and realizes unsupervised training by means of spatial conversion function and differentiable mutual information loss function In the 3D image registration tasks of MRI T1, MRI T2 and CT, the proposed method is compared with the existing advanced multi-modal registration methods In addition, the registration performance of the proposed method is demonstrated on the latest COVID-19 CT data A large number of results show that the proposed method has a competitive advantage in registration accuracy compared with other methods, and greatly reduces the calculation time © 2021, Editorial Board of JBUAA All right reserved","PeriodicalId":39840,"journal":{"name":"北京航空航天大学学报","volume":"47 1","pages":"658-664"},"PeriodicalIF":0.0000,"publicationDate":"2021-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"北京航空航天大学学报","FirstCategoryId":"1087","ListUrlMain":"https://doi.org/10.13700/J.BH.1001-5965.2020.0449","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"Engineering","Score":null,"Total":0}
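The second block is the differentiable mutual information loss. The abstract does not specify the estimator, so the following sketch uses a common choice, a Parzen-window (soft-binned) joint histogram in which every pixel contributes a Gaussian weight to every intensity bin, making the joint distribution, and hence the MI, a smooth function of the warped image. Intensities are assumed normalized to [0, 1], and `num_bins` and `sigma` are illustrative hyperparameters, not values from the paper.

```python
# Sketch of a differentiable mutual information loss via Parzen-window
# (soft-binned) histograms -- one common way to realize the "differentiable
# mutual information loss function" the abstract names; the paper's exact
# estimator may differ.
import torch


def mutual_information_loss(
    fixed: torch.Tensor,
    warped: torch.Tensor,
    num_bins: int = 32,
    sigma: float = 0.05,
) -> torch.Tensor:
    """Return negative MI between `fixed` and `warped` (same shape, in [0, 1])."""
    centers = torch.linspace(0.0, 1.0, num_bins, device=fixed.device)
    f = fixed.reshape(fixed.shape[0], -1, 1)   # (B, N, 1)
    m = warped.reshape(warped.shape[0], -1, 1)
    # Soft assignment: each pixel spreads a Gaussian weight over all bins,
    # so the histograms (and the MI) stay differentiable.
    wf = torch.exp(-0.5 * ((f - centers) / sigma) ** 2)  # (B, N, num_bins)
    wm = torch.exp(-0.5 * ((m - centers) / sigma) ** 2)
    wf = wf / (wf.sum(dim=-1, keepdim=True) + 1e-8)
    wm = wm / (wm.sum(dim=-1, keepdim=True) + 1e-8)
    # Joint distribution over intensity-bin pairs, then its marginals.
    p_joint = torch.bmm(wf.transpose(1, 2), wm) / wf.shape[1]  # (B, bins, bins)
    p_f = p_joint.sum(dim=2, keepdim=True)
    p_m = p_joint.sum(dim=1, keepdim=True)
    mi = (p_joint * torch.log((p_joint + 1e-8) / (p_f * p_m + 1e-8))).sum(dim=(1, 2))
    return -mi.mean()  # minimize the negative, i.e. maximize MI
```

Put together, a registration network would take the concatenated fixed and moving images, predict `flow`, and minimize `mutual_information_loss(fixed, warp(moving, flow))`, typically alongside a smoothness penalty on `flow`. Because this objective needs no correspondences or segmentation labels, training is unsupervised, as the abstract claims. Note that the (N × num_bins) soft-assignment matrices become large for full 3D volumes, so a practical implementation would subsample voxels at each step.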