Feasibility to virtually generate T2 fat-saturated breast MRI by convolutional neural networks

Andrzej Liebert, Dominique Hadler, Hannes Schreiter, Chris Ehring, Luise Brock, Lorenz A. Kapsner, Jessica Eberle, Ramona Erber, Julius Emons, Frederik B. Laun, Michael Uder, Evelyn Wenkel, Sabine Ohlmeyer, Sebastian Bickelhaupt

medRxiv - Radiology and Imaging, posted 2024-06-25. doi: 10.1101/2024.06.25.24309404
Background: Breast magnetic resonance imaging (MRI) protocols often include T2-weighted fat-saturated (T2w-FS) sequences, which are vital for tissue characterization but significantly increase scan time. Purpose: This study aims to evaluate whether a 2D-U-Net neural network can generate virtual T2w-FS images from routine multiparametric breast MRI sequences.
Materials and Methods: This IRB-approved, retrospective study included n=914 breast MRI examinations performed between January 2017 and June 2020. The dataset was divided into training (n=665), validation (n=74), and test (n=175) sets. The U-Net was trained on T1-weighted (T1w), diffusion-weighted imaging (DWI), and dynamic contrast-enhanced (DCE) sequences to generate virtual T2w-FS images (VirtuT2). Quantitative metrics and a qualitative multi-reader assessment by two radiologists were used to evaluate the VirtuT2 images.
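For orientation only, the sketch below shows how a 2D U-Net of the kind described here could map a stack of multiparametric input slices (e.g., T1w, DWI, and DCE channels) to a single virtual T2w-FS slice. The channel count, network depth, filter numbers, and L1 training loss are illustrative assumptions, not the authors' published implementation.

# Minimal 2D U-Net sketch for multiparametric-MRI-to-virtual-T2w-FS mapping.
# Assumptions (not taken from the paper): 6 input channels, 3 downsampling
# stages, 32 base filters, and an L1 reconstruction loss.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with batch norm and ReLU, as in a standard U-Net stage."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class UNet2D(nn.Module):
    def __init__(self, in_channels=6, out_channels=1, base_filters=32):
        super().__init__()
        f = base_filters
        self.enc1 = conv_block(in_channels, f)
        self.enc2 = conv_block(f, 2 * f)
        self.enc3 = conv_block(2 * f, 4 * f)
        self.bottleneck = conv_block(4 * f, 8 * f)
        self.pool = nn.MaxPool2d(2)
        self.up3 = nn.ConvTranspose2d(8 * f, 4 * f, kernel_size=2, stride=2)
        self.dec3 = conv_block(8 * f, 4 * f)
        self.up2 = nn.ConvTranspose2d(4 * f, 2 * f, kernel_size=2, stride=2)
        self.dec2 = conv_block(4 * f, 2 * f)
        self.up1 = nn.ConvTranspose2d(2 * f, f, kernel_size=2, stride=2)
        self.dec1 = conv_block(2 * f, f)
        self.head = nn.Conv2d(f, out_channels, kernel_size=1)

    def forward(self, x):
        # Encoder with skip connections, then decoder with channel concatenation.
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        b = self.bottleneck(self.pool(e3))
        d3 = self.dec3(torch.cat([self.up3(b), e3], dim=1))
        d2 = self.dec2(torch.cat([self.up2(d3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)  # one-channel virtual T2w-FS slice

# Example: a batch of 2D slices with 6 hypothetical input channels (T1w, DWI, DCE phases).
model = UNet2D(in_channels=6, out_channels=1)
x = torch.randn(1, 6, 256, 256)                      # [batch, channels, height, width]
virtual_t2w_fs = model(x)                            # shape: [1, 1, 256, 256]
loss = nn.L1Loss()(virtual_t2w_fs, torch.randn_like(virtual_t2w_fs))  # placeholder target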
Results: VirtuT2 images demonstrated high structural similarity (SSIM=0.87) and peak signal-to-noise ratio (PSNR=24.90) compared to the original T2w-FS images. The elevated high-frequency error norm (HFNE=0.87) indicates pronounced blurring in the VirtuT2 images, which was also confirmed in the qualitative reading. The two radiologists correctly identified VirtuT2 images with 92.3% and 94.2% accuracy, respectively. No significant difference in diagnostic image quality (DIQ) was noted by one reader (p=0.21), while the other reported significantly lower DIQ for VirtuT2 (p≤0.001). Moderate inter-reader agreement was observed for edema detection on T2w-FS images (κ=0.43), decreasing to fair agreement on VirtuT2 images (κ=0.36).
Conclusion: The 2D-U-Net can technically generate virtual T2w-FS images with high similarity to real T2w-FS images, though blurring remains a limitation. Further investigation of other architectures and the use of larger datasets is needed to improve clinical applicability.
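For readers who want to reproduce the quantitative comparison reported above, the following sketch computes SSIM and PSNR with scikit-image and one common formulation of the high-frequency error norm (HFNE). The abstract does not specify the exact HFNE definition, frequency cutoff, or intensity normalisation used in the study, so those details are assumptions.

# Quantitative image-quality metrics named in the Results.
# SSIM and PSNR come from scikit-image; the HFNE below is an assumed
# formulation: norm of the high-pass Fourier residual, normalised by the
# high-pass content of the reference image.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def hfne(reference: np.ndarray, generated: np.ndarray, cutoff: float = 0.1) -> float:
    """High-frequency error norm between a reference and a generated slice (assumed definition)."""
    rows, cols = reference.shape
    # Radial high-pass mask in the centred 2D Fourier domain.
    u = np.fft.fftshift(np.fft.fftfreq(rows))[:, None]
    v = np.fft.fftshift(np.fft.fftfreq(cols))[None, :]
    highpass = np.sqrt(u ** 2 + v ** 2) > cutoff
    ref_hf = np.fft.fftshift(np.fft.fft2(reference)) * highpass
    gen_hf = np.fft.fftshift(np.fft.fft2(generated)) * highpass
    return float(np.linalg.norm(ref_hf - gen_hf) / np.linalg.norm(ref_hf))

# Usage with two co-registered slices scaled to [0, 1]:
t2w_fs = np.random.rand(256, 256)      # stands in for the acquired T2w-FS slice
virtut2 = np.random.rand(256, 256)     # stands in for the network output
ssim_val = structural_similarity(t2w_fs, virtut2, data_range=1.0)
psnr_val = peak_signal_noise_ratio(t2w_fs, virtut2, data_range=1.0)
hfne_val = hfne(t2w_fs, virtut2)
print(f"SSIM={ssim_val:.2f}, PSNR={psnr_val:.2f}, HFNE={hfne_val:.2f}")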