Inpainting Cropped Diffusion MRI using Deep Generative Models

Rafi Ayub, Qingyu Zhao, M J Meloy, Edith V Sullivan, Adolf Pfefferbaum, Ehsan Adeli, Kilian M Pohl

PRedictive Intelligence in MEdicine. PRIME (Workshop), vol. 12329, pp. 91-100. Published 2020-10-01.
DOI: 10.1007/978-3-030-59354-4_9
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8123091/pdf/nihms-1698575.pdf
Minor artifacts introduced during image acquisition are often negligible to the human eye, such as a confined field of view that causes the MRI to miss the top of the head. This cropping artifact, however, can cause suboptimal processing of the MRI, resulting in data omission or reduced power in subsequent analyses. We propose to avoid data or quality loss by restoring these missing regions of the head via variational autoencoders (VAE), a deep generative model that has previously been applied to high-resolution image reconstruction. Based on diffusion-weighted images (DWI) acquired by the National Consortium on Alcohol and Neurodevelopment in Adolescence (NCANDA), we evaluate the accuracy of inpainting the top of the head by common autoencoder models (U-Net, VQVAE, and VAE-GAN) and a custom model proposed herein called U-VQVAE. Our results show that U-VQVAE not only achieved the highest accuracy, but also resulted in MRI processing producing lower fractional anisotropy (FA) in the supplementary motor area than FA derived from the original MRIs. Lower FA implies that inpainting reduces noise in processing DWI and thus increases the quality of the generated results. The code is available at https://github.com/RdoubleA/DWIinpainting.
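For readers unfamiliar with the VQVAE family of models the abstract compares, the defining component is a vector-quantization bottleneck: each encoder output vector is snapped to its nearest entry in a learned codebook before decoding. The following is a minimal NumPy sketch of that quantization step only — an illustration of the general technique, not the authors' implementation (their code is in the linked repository); the function name, toy codebook, and toy latents are all hypothetical.

```python
import numpy as np

def vector_quantize(z_e, codebook):
    """Map each encoder latent in z_e (shape N x D) to its nearest
    codebook entry (shape K x D) by squared Euclidean distance.
    Returns the quantized latents and the chosen codebook indices."""
    # Pairwise squared distances between every latent and codebook vector
    dists = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = dists.argmin(axis=1)   # index of the nearest codebook entry
    z_q = codebook[idx]          # quantized (discretized) latents
    return z_q, idx

# Toy example: 4 two-dimensional latents, codebook with 3 entries
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, 1.0]])
z_e = np.array([[0.1, -0.1], [0.9, 1.2], [-1.1, 0.8], [0.2, 0.1]])
z_q, idx = vector_quantize(z_e, codebook)  # idx -> [0, 1, 2, 0]
```

In a full VQVAE, the decoder reconstructs the image from `z_q`, and gradients are passed through the non-differentiable `argmin` via a straight-through estimator; the U-VQVAE proposed in the paper additionally combines this bottleneck with a U-Net-style architecture.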