One-Shot Generative Adversarial Learning for MRI Segmentation of Craniomaxillofacial Bony Structures

Xu Chen, J. Xia, D. Shen, C. Lian, Li Wang, H. Deng, S. Fung, Dong Nie, Kim-Han Thung, P. Yap, J. Gateno

IEEE Transactions on Medical Imaging, pp. 787-796, March 2020. DOI: 10.1109/TMI.2019.2935409

Abstract:
Compared with computed tomography (CT), magnetic resonance imaging (MRI) delineation of craniomaxillofacial (CMF) bony structures avoids harmful radiation exposure. However, bony boundaries are blurry in MRI, so structural information must be borrowed from CT during training. This is challenging because paired MRI-CT data are typically scarce. In this paper, we propose to make full use of unpaired data, which are typically abundant, along with a single pair of MRI-CT scans, to construct a one-shot generative adversarial model for automated MRI segmentation of CMF bony structures. Our model consists of a cross-modality image synthesis sub-network, which learns the mapping between CT and MRI, and an MRI segmentation sub-network; the two sub-networks are trained jointly in an end-to-end manner. Moreover, in the training phase, a neighbor-based anchoring method is proposed to reduce the ambiguity inherent in cross-modality synthesis, and a feature-matching-based semantic consistency constraint is proposed to encourage segmentation-oriented MRI synthesis. Experimental results demonstrate the superiority of our method, both qualitatively and quantitatively, over state-of-the-art MRI segmentation methods.
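To make the joint training scheme described above concrete, the following PyTorch fragment is a minimal illustrative sketch, not the authors' implementation: the tiny network shapes, unit loss weights, and the choice of feature layer for the consistency term are assumptions, and the adversarial discriminator and the neighbor-based anchoring on unpaired data are omitted for brevity.

    # Illustrative sketch only -- not the authors' code. A CT->MRI synthesis
    # sub-network and an MRI segmentation sub-network share one optimizer and
    # are trained end-to-end on the single paired CT/MRI volume ("one shot").
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SynthesisNet(nn.Module):  # CT -> synthetic MRI
        def __init__(self):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv3d(16, 1, 3, padding=1))

        def forward(self, ct):
            return self.body(ct)

    class SegNet(nn.Module):  # MRI -> per-voxel background/bone logits
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(1, 16, 3, padding=1), nn.ReLU())
            self.head = nn.Conv3d(16, 2, 1)

        def forward(self, mri):
            f = self.features(mri)
            return self.head(f), f  # logits plus features for the consistency term

    syn, seg = SynthesisNet(), SegNet()
    opt = torch.optim.Adam(list(syn.parameters()) + list(seg.parameters()), lr=1e-4)

    # The single paired CT/MRI volume with a CT-derived bone label map.
    ct = torch.randn(1, 1, 32, 32, 32)
    mri = torch.randn(1, 1, 32, 32, 32)
    label = torch.randint(0, 2, (1, 32, 32, 32))

    fake_mri = syn(ct)                      # cross-modality synthesis
    logits_fake, feat_fake = seg(fake_mri)  # segment the synthetic MRI
    logits_real, feat_real = seg(mri)       # segment the real MRI

    loss = (F.l1_loss(fake_mri, mri)                      # paired synthesis fidelity
            + F.cross_entropy(logits_fake, label)         # segmentation on synthetic MRI
            + F.cross_entropy(logits_real, label)         # segmentation on real MRI
            + F.mse_loss(feat_fake, feat_real.detach()))  # feature-matching consistency
    opt.zero_grad()
    loss.backward()
    opt.step()

The last term is one plausible reading of the feature-matching constraint: by pulling the segmenter's features on synthetic MRI toward those on real MRI, the synthesis sub-network is pushed toward images the segmenter can parse, which is the "segmentation-oriented synthesis" the abstract refers to.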
About the Journal:
The IEEE Transactions on Medical Imaging (T-MI) is a journal that welcomes the submission of manuscripts focusing on various aspects of medical imaging. The journal encourages the exploration of body structure, morphology, and function through different imaging techniques, including ultrasound, X-rays, magnetic resonance, radionuclides, microwaves, and optical methods. It also promotes contributions related to cell and molecular imaging, as well as all forms of microscopy.
T-MI publishes original research papers covering a wide range of topics, including but not limited to novel acquisition techniques, medical image processing and analysis, visualization and performance, pattern recognition, machine learning, and related methods. The journal particularly encourages highly technical studies that offer new perspectives. Emphasizing the unification of medicine, biology, and imaging, T-MI seeks to bridge instrumentation, hardware, software, mathematics, physics, biology, and medicine through new analysis methods.
While the journal welcomes strong application papers that describe novel methods, manuscripts that apply medically adopted or well-established methods to important applications without significant methodological innovation are directed to other journals. T-MI is indexed in PubMed® and MEDLINE®, products of the United States National Library of Medicine.