Pub Date: 2024-05-02. eCollection Date: 2024-01-01. DOI: 10.1155/2024/8972980
Md Biddut Hossain, Rupali Kiran Shinde, Shariar Md Imtiaz, F M Fahmid Hossain, Seok-Hee Jeon, Ki-Chul Kwon, Nam Kim
We present a deep learning-based method that corrects motion artifacts and thus accelerates data acquisition and reconstruction of magnetic resonance images. The novel model, the Motion Artifact Correction by Swin Network (MACS-Net), uses a Swin transformer layer as the fundamental block and the Unet architecture as the neural network backbone. We employ a hierarchical transformer with shifted windows to extract multiscale contextual features during encoding. A new dual upsampling technique enhances the spatial resolution of feature maps in the Swin transformer-based decoder layer. A raw magnetic resonance imaging dataset is used for network training and testing; the data contain various motion artifacts along with ground truth images of the same subjects. The results were compared to six state-of-the-art MRI motion correction methods on two types of motion. When motions were brief (within 5 s), the method reduced the average normalized root mean square error (NRMSE) from 45.25% to 17.51%, increased the mean structural similarity index measure (SSIM) from 79.43% to 91.72%, and increased the peak signal-to-noise ratio (PSNR) from 18.24 to 26.57 dB. Similarly, when motions lasted 5 to 10 s, our approach decreased the average NRMSE from 60.30% to 21.04%, improved the mean SSIM from 33.86% to 90.33%, and increased the PSNR from 15.64 to 24.99 dB. The anatomical structures of the corrected images and the motion-free brain data were similar.
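For readers who want to check the reported figures of merit, a minimal NumPy sketch of NRMSE, PSNR, and a single-window SSIM follows. These are standard textbook definitions; the paper's exact NRMSE normalization and SSIM windowing may differ (production SSIM uses a sliding Gaussian window).

```python
import numpy as np

def nrmse(ref, img):
    """Root mean square error normalized by the RMS of the reference image."""
    err = np.sqrt(np.mean((ref - img) ** 2))
    return err / np.sqrt(np.mean(ref ** 2))

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(ref, img, data_range=1.0):
    """Single-window (global) SSIM; libraries use a local sliding window instead."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), img.mean()
    var_x, var_y = ref.var(), img.var()
    cov = ((ref - mu_x) * (img - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

Reporting NRMSE and SSIM as percentages, as the abstract does, is just a factor of 100 on these values.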
Title: Swin Transformer and the Unet Architecture to Correct Motion Artifacts in Magnetic Resonance Image Reconstruction. (International Journal of Biomedical Imaging, vol. 2024, article 8972980; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11081754/pdf/)
Pub Date: 2024-04-29. eCollection Date: 2024-01-01. DOI: 10.1155/2024/6347920
N I Md Ashafuddula, Rafiqul Islam
Brain tumors are critical neurological ailments caused by uncontrolled cell growth in the brain or skull, and they are often fatal. Improving patient survival depends on prompt detection; however, the complexity of brain tissue makes early diagnosis challenging. Hence, automated tools are necessary to aid healthcare professionals. This study aims to improve the efficacy of computerized brain tumor detection in a clinical setting through a deep learning model. To that end, a novel thresholding-based MRI segmentation approach with a contour-based transfer learning model (ContourTL-Net) is proposed to facilitate the clinical detection of brain malignancies at an early stage. The model utilizes contour-based analysis, which is critical for object detection, precise segmentation, and capturing subtle variations in tumor morphology. It employs a VGG-16 architecture pretrained on ImageNet for feature extraction and classification, using ten nontrainable and three trainable convolutional layers together with three dropout layers. The proposed ContourTL-Net model is evaluated on two benchmark datasets in four ways, one of which, evaluation on unseen data, represents the clinical scenario. Validating a deep learning model on unseen data is crucial for establishing its generalization capability, domain adaptation, robustness, and real-world applicability. The model classifies the unseen data highly accurately, achieving a perfect sensitivity and negative predictive value (NPV) of 100%, 98.60% specificity, 99.12% precision, 99.56% F1-score, and 99.46% accuracy. Additionally, the outcomes of the proposed model are compared with state-of-the-art methodologies to further validate its effectiveness.
The proposed solution outperforms existing solutions on both seen and unseen data, with the potential to significantly improve brain tumor detection efficiency and accuracy, leading to earlier diagnoses and improved patient outcomes.
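The abstract describes a thresholding-based segmentation front end without specifying the algorithm. As an illustrative stand-in only (not the authors' method), Otsu's method is a common choice for intensity thresholding of MRI slices:

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Return the intensity threshold that maximizes between-class variance (Otsu).
    Generic sketch; ContourTL-Net's exact thresholding scheme is unspecified here."""
    hist, edges = np.histogram(image.ravel(), bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2
    hist = hist.astype(float)
    weight1 = np.cumsum(hist)             # pixels at or below each bin
    weight2 = weight1[-1] - weight1       # pixels above each bin
    csum = np.cumsum(hist * centers)
    mean1 = csum / np.where(weight1 == 0, 1, weight1)
    mean2 = (csum[-1] - csum) / np.where(weight2 == 0, 1, weight2)
    between = weight1 * weight2 * (mean1 - mean2) ** 2
    return centers[np.argmax(between)]
```

A binary mask is then simply `image > otsu_threshold(image)`, which a contour extractor can trace for the downstream classifier.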
Title: ContourTL-Net: Contour-Based Transfer Learning Algorithm for Early-Stage Brain Tumor Detection. (International Journal of Biomedical Imaging, vol. 2024, article 6347920; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11074715/pdf/)
A challenge in accurately identifying and classifying left ventricular hypertrophy (LVH) is distinguishing it from hypertrophic cardiomyopathy (HCM) and Fabry disease. The reliance on imaging techniques often requires the expertise of multiple specialists, including cardiologists, radiologists, and geneticists; this variability in the interpretation and classification of LVH leads to inconsistent diagnoses. LVH, HCM, and Fabry cardiomyopathy can be differentiated using T1 mapping on cardiac magnetic resonance imaging (MRI). However, differentiating HCM from Fabry cardiomyopathy using echocardiography or MRI cine images is challenging for cardiologists. Our proposed system, the MRI short-axis view left ventricular hypertrophy classifier (MSLVHC), is a standardized, high-accuracy AI classification model trained on MRI short-axis (SAX) cine images to distinguish HCM from Fabry disease. The model achieved an F1-score of 0.846, an accuracy of 0.909, and an AUC of 0.914 when tested on the Taipei Veterans General Hospital (TVGH) dataset. A single-blind study and external testing on data from the Taichung Veterans General Hospital (TCVGH) further demonstrated the model's reliability and usefulness, achieving an F1-score of 0.727, an accuracy of 0.806, and an AUC of 0.918. This AI model holds promise as a valuable tool for assisting specialists in diagnosing LVH diseases.
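The F1-score and accuracy reported above follow the standard confusion-matrix definitions. A small sketch with hypothetical counts (not the study's data) makes the arithmetic explicit:

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, precision, sensitivity (recall), and F1 from confusion counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"accuracy": accuracy, "precision": precision,
            "sensitivity": sensitivity, "f1": f1}
```

Because F1 is the harmonic mean of precision and recall, a model can post a higher AUC than accuracy on an imbalanced external test set, as seen in the TCVGH results.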
Pub Date: 2024-04-26. DOI: 10.1155/2024/6114826
Authors: Wei-Wen Chen, Ling Kuo, Yi-Xun Lin, Wen-Chung Yu, Chien-Chao Tseng, Yenn-Jiang Lin, Ching-Chun Huang, Shih-Lin Chang, Jacky Chung-Hao Wu, Chun-Ku Chen, Ching-Yao Weng, Siwa Chan, Wei-Wen Lin, Yu-Cheng Hsieh, Ming-Chih Lin, Yun-Ching Fu, Tsung Chen, Shih-Ann Chen, Henry Horng-Shing Lu
Title: A Deep Learning Approach to Classify Fabry Cardiomyopathy from Hypertrophic Cardiomyopathy Using Cine Imaging on Cardiac Magnetic Resonance. (International Journal of Biomedical Imaging, vol. 2024, article 6114826; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11068448/pdf/)
Purpose: This study aims to evaluate the efficacy of the gradient-spin echo- (GraSE-) based short tau inversion recovery (STIR) sequence (GraSE-STIR) in cardiovascular magnetic resonance (CMR) imaging compared with the conventional turbo spin echo- (TSE-) based STIR sequence, focusing on image quality, specific absorption rate (SAR), and image acquisition time.
Methods: In a prospective study, we examined forty-four normal volunteers and seventeen patients referred for CMR imaging using conventional STIR and GraSE-STIR techniques. Signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), image quality, T2 signal intensity (SI) ratio, SAR, and image acquisition time were compared between both sequences.
Results: GraSE-STIR showed significant improvements in image quality (4.15 ± 0.8 vs. 3.34 ± 0.9, p = 0.024) and cardiac motion artifact reduction (7 vs. 18 of 53 cases, p = 0.038) compared to conventional STIR. Furthermore, the acquisition time (27.17 ± 3.53 vs. 36.9 ± 4.08 seconds, p = 0.041) and the local torso SAR (<13% vs. <17%, p = 0.047) were significantly lower for GraSE-STIR in the short-axis plane. However, no significant differences were observed in T2 SI ratio (p = 0.141), SNR (p = 0.093), CNR (p = 0.068), or SAR (p = 0.071) between the two sequences.
Conclusions: GraSE-STIR offers notable advantages over the conventional STIR sequence, with improved image quality, reduced motion artifacts, and shorter acquisition times. These findings highlight the potential of GraSE-STIR as a valuable technique for routine clinical CMR imaging.
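The SNR and CNR compared above are typically computed from regions of interest (ROIs). A minimal sketch of one common ROI-based convention (definitions vary by protocol, and this is not necessarily the exact formula used in the study):

```python
import numpy as np

def snr(signal_roi, noise_roi):
    """SNR as mean signal intensity over the standard deviation of background noise."""
    return signal_roi.mean() / noise_roi.std()

def cnr(roi_a, roi_b, noise_roi):
    """CNR as the absolute mean-intensity difference of two tissues over noise SD."""
    return abs(roi_a.mean() - roi_b.mean()) / noise_roi.std()
```

In practice the ROIs are drawn on the myocardium (signal), a second tissue such as blood pool or skeletal muscle (contrast), and air outside the body (noise).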
Pub Date: 2024-04-01. DOI: 10.1155/2024/8456669
Authors: Sadegh Dehghani, Shapoor Shirani, Elahe Jazayeri Gharebagh
Title: Enhanced Myocardial Tissue Visualization: A Comparative Cardiovascular Magnetic Resonance Study of Gradient-Spin Echo-STIR and Conventional STIR Imaging. (International Journal of Biomedical Imaging, vol. 2024, article 8456669; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11001468/pdf/)
Pub Date: 2024-03-19. eCollection Date: 2024-01-01. DOI: 10.1155/2024/2741986
Yao Zheng, Jingliang Zhang, Dong Huang, Xiaoshuo Hao, Weijun Qin, Yang Liu
Background: MRI is an important tool for accurate detection and targeted biopsy of prostate lesions. However, the imaging appearances of some prostate cancers are similar to those of the surrounding normal tissue on MRI, which are referred to as MRI-invisible prostate cancers (MIPCas). The detection of MIPCas remains challenging and requires extensive systematic biopsy for identification. In this study, we developed a weakly supervised UNet (WSUNet) to detect MIPCas.
Methods: The study included 777 patients (training set: 600; testing set: 177), all of whom underwent comprehensive prostate biopsy using an MRI-ultrasound fusion system. MIPCas were identified on MRI based on Gleason grade (≥7) from known systematic biopsy results.
Results: The WSUNet model underwent validation through systematic biopsy in the testing set with an AUC of 0.764 (95% CI: 0.728-0.798). Furthermore, WSUNet exhibited a statistically significant precision improvement of 91.3% (p < 0.01) over conventional systematic biopsy methods in the testing set. This improvement resulted in a substantial 47.6% (p < 0.01) decrease in unnecessary biopsy needles, while maintaining the same number of positively identified cores as in the original systematic biopsy.
Conclusions: The proposed WSUNet can effectively detect MIPCas, thereby reducing unnecessary biopsies.
Title: Detecting MRI-Invisible Prostate Cancers Using a Weakly Supervised Deep Learning Model. (International Journal of Biomedical Imaging, vol. 2024, article 2741986; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10965281/pdf/)
Pub Date: 2024-03-08. eCollection Date: 2024-01-01. DOI: 10.1155/2024/7001343
Mohammad A Rawashdeh, Sara Almazrouei, Maha Zaitoun, Praveen Kumar, Charbel Saade
Background: Artificial intelligence (AI) applications are rapidly advancing in the field of medical imaging. This study aims to investigate radiographers' perception and knowledge of AI.
Methods: An online survey was administered using Google Forms, comprising 20 questions on radiographers' perception of AI. The questionnaire was divided into two parts. The first collected demographic information and asked whether participants think AI should be part of medical training, whether they had prior knowledge of the technologies used in AI, and whether they would like to receive training on AI. The second part comprised two fields, the first containing 16 questions on radiographers' perception of AI applications in radiology. Descriptive analysis and logistic regression were used to evaluate the effect of gender on the questionnaire items.
Results: Familiarity with AI was low, with only 52 of 100 respondents (52%) reporting good familiarity with it. Many participants considered AI useful in the medical field (74%). Nearly all participants (98%) believed that AI should be integrated into university education, and 87% preferred to receive training on AI, with some already having prior knowledge of AI technologies. Logistic regression indicated that male gender and 23-27 years of experience were significantly associated with the degree of familiarity with AI technology, with crude odds ratios (COR) of 1.89 and 1.87, respectively.
Conclusions: This study suggests that radiographers hold a favorable attitude towards AI in radiology. Most participants surveyed believed that AI should be part of radiography education. AI training programs for undergraduate and postgraduate radiographers may be necessary to prepare them for AI tools in radiology.
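The crude odds ratios (COR) in the results come from 2×2 contingency tables, e.g. male gender versus good familiarity with AI. A minimal sketch with purely illustrative counts (not the study's data):

```python
def crude_odds_ratio(exposed_yes, exposed_no, unexposed_yes, unexposed_no):
    """Crude (unadjusted) odds ratio from a 2x2 table:
    rows = exposure (e.g. male gender), columns = outcome (e.g. good familiarity).
    Counts here are hypothetical, for illustration only."""
    return (exposed_yes * unexposed_no) / (exposed_no * unexposed_yes)
```

A COR above 1, as reported (1.89 and 1.87), means the odds of the outcome are higher in the exposed group than in the unexposed group.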
Title: Empowering Radiographers: A Call for Integrated AI Training in University Curricula. (International Journal of Biomedical Imaging, vol. 2024, article 7001343; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10942819/pdf/)
Pub Date: 2024-02-28. eCollection Date: 2024-01-01. DOI: 10.1155/2024/8862387
Kwangsung Oh, Piero R Bianco
Superresolution structured illumination microscopy (SIM) is an ideal modality for imaging live cells due to its relatively high speed and low photon-induced damage to the cells. The rate-limiting step in observing a superresolution image in SIM is often the reconstruction speed of the algorithm used to form a single image from as many as nine raw images. Reconstruction algorithms impose a significant computing burden due to an intricate workflow and a large number of often complex calculations needed to produce the final image. Adding to this burden, the code, even within the MATLAB environment, can be inefficiently written by microscopists who are not computer scientists, and it often ignores the processing power of the computer's graphics processing unit (GPU). To address these issues, we present simple but efficient approaches that first revise the MATLAB code and then convert it to GPU-optimized code. When combined with cost-effective, high-performance GPU-enabled computers, a 4- to 500-fold improvement in algorithm execution speed is observed, as shown for the image-denoising Hessian-SIM algorithm.
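The paper's workflow is MATLAB-specific, but the same conversion pattern exists in Python: write array code once against a shared API, then swap NumPy for CuPy when a CUDA device is available. A hedged sketch of that pattern (the `gradient_magnitude` kernel is an illustrative placeholder, not the Hessian-SIM algorithm):

```python
import numpy as np

try:
    import cupy as cp
    cp.zeros(1)                 # raises if no usable CUDA device is present
    xp, on_gpu = cp, True
except Exception:
    xp, on_gpu = np, False      # transparent CPU fallback: same array API

def gradient_magnitude(image):
    """Illustrative image-processing kernel written once against the shared API."""
    gy, gx = xp.gradient(xp.asarray(image, dtype=xp.float32))
    return xp.sqrt(gx * gx + gy * gy)

def to_cpu(arr):
    """Bring a result back to host memory regardless of backend."""
    return arr.get() if on_gpu else arr
```

The design point mirrors the paper's: the algorithm code is untouched, and only the array backend changes, so speedups come from the GPU rather than from rewriting the reconstruction logic.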
Title: Facile Conversion and Optimization of Structured Illumination Image Reconstruction Code into the GPU Environment. (International Journal of Biomedical Imaging, vol. 2024, article 8862387; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10917484/pdf/)
Pub Date: 2024-02-05. eCollection Date: 2024-01-01. DOI: 10.1155/2024/4102461
Qian Zheng, Kefu Guo, Yinghui Meng, Jiaofen Nan, Lin Xu
Background: Deterministic fiber tracking methods have the advantages of high computational efficiency and good repeatability, making them suitable for noninvasive estimation of brain structural connectivity in clinical settings. To address the tendency of classical deterministic methods to deviate from the true tracking direction in regions of crossing fibers, we propose an adaptive correction-based deterministic white matter fiber tracking method, named FTACTD.
Methods: The proposed FTACTD method accurately tracks white matter fibers by adaptively adjusting the deflection direction based on the tensor matrix and the input fiber direction of adjacent voxels. The degree of directional correction adapts to the shape of the diffusion tensor, mimicking the actual tracking deflection angle and direction. Both forward and reverse tracking are employed to recover the entire fiber. The effectiveness of the proposed method is validated and quantified on both simulated and real brain datasets. Indicators including invalid bundles (IB), valid bundles (VB), invalid connections (IC), no connections (NC), and valid connections (VC) are used to assess performance on simulated data and real diffusion-weighted imaging (DWI) data.
Results: The experimental results on the simulated data show that the FTACTD method outperforms existing methods, achieving the highest number of VB with a total of 13 bundles. Additionally, it identifies the fewest incorrect fiber bundles, with only 32 bundles identified as wrong. Compared to the FACT method, the FTACTD method reduces the number of NC by 36.38%. In terms of VC, the FTACTD method surpasses even SD_Stream, the best-performing deterministic method, by 1.64%. Extensive in vivo experiments demonstrate the superiority of the proposed method in tracking more accurate and complete fiber paths, resulting in improved continuity.
Conclusion: The FTACTD method proposed in this study delivers superior tracking results and provides a methodological basis for the investigation, diagnosis, and treatment of brain disorders associated with white matter fiber deficits and abnormalities.
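The abstract does not spell out FTACTD's correction rule, but the FACT-style deterministic streamline tracking it builds on can be sketched in a few lines: step along the principal eigenvector of the local diffusion tensor, keep the direction sign-consistent between steps, and stop on low fractional anisotropy or a sharp turn. The `tensor_at` callback and all threshold values below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def principal_direction(tensor, prev_dir):
    """Principal eigenvector of a 3x3 diffusion tensor, sign-aligned with
    the previous step direction (eigenvectors are sign-ambiguous)."""
    vals, vecs = np.linalg.eigh(tensor)   # eigenvalues in ascending order
    v = vecs[:, -1]                       # direction of largest eigenvalue
    return v if np.dot(v, prev_dir) >= 0 else -v

def fractional_anisotropy(tensor):
    """Standard FA formula from the tensor's eigenvalues."""
    vals = np.linalg.eigvalsh(tensor)
    md = vals.mean()
    num = np.sqrt(((vals - md) ** 2).sum())
    den = np.sqrt((vals ** 2).sum())
    return np.sqrt(1.5) * num / den if den > 0 else 0.0

def track(seed, tensor_at, step=0.5, max_steps=200, fa_min=0.2, angle_max=60.0):
    """March a streamline from `seed`; `tensor_at(pos)` returns the local
    diffusion tensor. Stop on low FA or a turn sharper than `angle_max`."""
    pos = np.asarray(seed, dtype=float)
    direction = principal_direction(tensor_at(pos), np.array([1.0, 0.0, 0.0]))
    path = [pos.copy()]
    cos_max = np.cos(np.radians(angle_max))
    for _ in range(max_steps):
        D = tensor_at(pos)
        if fractional_anisotropy(D) < fa_min:
            break                          # left the white matter
        new_dir = principal_direction(D, direction)
        if np.dot(new_dir, direction) < cos_max:
            break                          # implausibly sharp bend
        pos = pos + step * new_dir
        path.append(pos.copy())
        direction = new_dir
    return np.array(path)
```

In a uniform tensor field whose principal axis points along x, the streamline marches straight along x; FTACTD's contribution is to replace the fixed principal-eigenvector step with a tensor-shape-dependent deflection correction.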
{"title":"White Matter Fiber Tracking Method with Adaptive Correction of Tracking Direction.","authors":"Qian Zheng, Kefu Guo, Yinghui Meng, Jiaofen Nan, Lin Xu","doi":"10.1155/2024/4102461","DOIUrl":"10.1155/2024/4102461","url":null,"abstract":"<p><strong>Background: </strong>The deterministic fiber tracking method has the advantages of high computational efficiency and good repeatability, making it suitable for the noninvasive estimation of brain structural connectivity in clinical settings. To address the tendency of classical deterministic methods to deviate from the true tracking direction in regions of crossing fibers, in this paper we propose an adaptive correction-based deterministic white matter fiber tracking method, named FTACTD.</p><p><strong>Methods: </strong>The proposed FTACTD method accurately tracks white matter fibers by adaptively adjusting the deflection direction strategy based on the tensor matrix and the input fiber direction of adjacent voxels. The degree of directional correction changes adaptively according to the shape of the diffusion tensor, mimicking the actual tracking deflection angle and direction. Furthermore, both forward and reverse tracking techniques are employed to track the entire fiber. The effectiveness of the proposed method is validated and quantified using both simulated and real brain datasets. Indicators such as invalid bundles (IB), valid bundles (VB), invalid connections (IC), no connections (NC), and valid connections (VC) are used to assess the performance of the proposed method on simulated data and real diffusion-weighted imaging (DWI) data.</p><p><strong>Results: </strong>The experimental results on the simulated data show that the FTACTD method outperforms existing methods, achieving the highest number of VB with a total of 13 bundles. Additionally, it identifies the fewest incorrect fiber bundles, with only 32 bundles identified as wrong.
Compared to the FACT method, the FTACTD method reduces the number of NC by 36.38%. In terms of VC, the FTACTD method surpasses even SD_Stream, the best-performing deterministic method, by 1.64%. Extensive in vivo experiments demonstrate the superiority of the proposed method in tracking more accurate and complete fiber paths, resulting in improved continuity.</p><p><strong>Conclusion: </strong>The FTACTD method proposed in this study delivers superior tracking results and provides a methodological basis for the investigation, diagnosis, and treatment of brain disorders associated with white matter fiber deficits and abnormalities.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2024 ","pages":"4102461"},"PeriodicalIF":7.6,"publicationDate":"2024-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10861278/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139724434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-02-03eCollection Date: 2024-01-01DOI: 10.1155/2024/3022192
Galib Muhammad Shahriar Himel, Md Masudul Islam, Kh Abdullah Al-Aff, Shams Ibne Karim, Md Kabir Uddin Sikder
Skin cancer is a significant health concern worldwide, and early, accurate diagnosis plays a crucial role in improving patient outcomes. In recent years, deep learning models have shown remarkable success in various computer vision tasks, including image classification. In this study, we introduce an approach for skin cancer classification using the vision transformer, a state-of-the-art deep learning architecture that has demonstrated exceptional performance in diverse image analysis tasks. The study utilizes the HAM10000 dataset, a publicly available dataset comprising 10,015 skin lesion images classified into two categories: benign (6705 images) and malignant (3310 images). The dataset consists of high-resolution images captured with dermatoscopes and carefully annotated by expert dermatologists. Preprocessing techniques, such as normalization and augmentation, are applied to enhance the robustness and generalization of the model. The vision transformer architecture is adapted to the skin cancer classification task; the model leverages the self-attention mechanism to capture intricate spatial and long-range dependencies within the images, enabling it to learn features relevant for accurate classification. The Segment Anything Model (SAM) is employed to segment the cancerous areas from the images, achieving an IoU of 96.01% and a Dice coefficient of 98.14%; various pretrained models are then used for classification with the vision transformer architecture. Extensive experiments and evaluations are conducted to assess the performance of our approach. The results demonstrate the superiority of the vision transformer model over traditional deep learning architectures in skin cancer classification in general, with some exceptions.
After experimenting with six models (ViT-Google, ViT-MAE, ViT-ResNet50, ViT-VAN, ViT-BEiT, and ViT-DiT), we found that the approach achieves 96.15% accuracy using Google's ViT patch-32 model with a low false-negative ratio on the test dataset, showcasing its potential as an effective tool for aiding dermatologists in the diagnosis of skin cancer.
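The two core mechanics the abstract relies on, patch-32 tokenisation and self-attention, can be illustrated without any pretrained weights. The sketch below shows how a 224 x 224 RGB image becomes 49 patch tokens and how single-head scaled dot-product attention mixes them; the random projection matrices are placeholders for the trained embeddings and attention weights of an actual ViT, not the paper's models.

```python
import numpy as np

def patchify(img, patch=32):
    """Split an H x W x C image into flattened, non-overlapping patches
    (the tokenisation step of a ViT patch-32 model)."""
    h, w, c = img.shape
    rows, cols = h // patch, w // patch
    return (img[:rows * patch, :cols * patch]
            .reshape(rows, patch, cols, patch, c)
            .swapaxes(1, 2)
            .reshape(rows * cols, patch * patch * c))

def self_attention(x, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over token embeddings."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Toy forward pass: 224x224 RGB image -> 49 patch tokens -> attention output.
rng = np.random.default_rng(0)
img = rng.random((224, 224, 3))
tokens = patchify(img) @ rng.normal(size=(32 * 32 * 3, 64))  # patch embedding
x = np.vstack([np.zeros((1, 64)), tokens])                   # prepend class token
out = self_attention(x, *(rng.normal(size=(64, 64)) for _ in range(3)))
```

In a real ViT the class-token row of the final layer's output feeds the classification head; here it simply demonstrates the 50-token (1 class + 49 patch) sequence shape.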
{"title":"Skin Cancer Segmentation and Classification Using Vision Transformer for Automatic Analysis in Dermatoscopy-Based Noninvasive Digital System.","authors":"Galib Muhammad Shahriar Himel, Md Masudul Islam, Kh Abdullah Al-Aff, Shams Ibne Karim, Md Kabir Uddin Sikder","doi":"10.1155/2024/3022192","DOIUrl":"https://doi.org/10.1155/2024/3022192","url":null,"abstract":"<p><p>Skin cancer is a significant health concern worldwide, and early, accurate diagnosis plays a crucial role in improving patient outcomes. In recent years, deep learning models have shown remarkable success in various computer vision tasks, including image classification. In this study, we introduce an approach for skin cancer classification using the vision transformer, a state-of-the-art deep learning architecture that has demonstrated exceptional performance in diverse image analysis tasks. The study utilizes the HAM10000 dataset, a publicly available dataset comprising 10,015 skin lesion images classified into two categories: benign (6705 images) and malignant (3310 images). The dataset consists of high-resolution images captured with dermatoscopes and carefully annotated by expert dermatologists. Preprocessing techniques, such as normalization and augmentation, are applied to enhance the robustness and generalization of the model. The vision transformer architecture is adapted to the skin cancer classification task; the model leverages the self-attention mechanism to capture intricate spatial and long-range dependencies within the images, enabling it to learn features relevant for accurate classification. The Segment Anything Model (SAM) is employed to segment the cancerous areas from the images, achieving an IoU of 96.01% and a Dice coefficient of 98.14%; various pretrained models are then used for classification with the vision transformer architecture. Extensive experiments and evaluations are conducted to assess the performance of our approach. The results demonstrate the superiority of the vision transformer model over traditional deep learning architectures in skin cancer classification in general, with some exceptions. After experimenting with six models (ViT-Google, ViT-MAE, ViT-ResNet50, ViT-VAN, ViT-BEiT, and ViT-DiT), we found that the approach achieves 96.15% accuracy using Google's ViT patch-32 model with a low false-negative ratio on the test dataset, showcasing its potential as an effective tool for aiding dermatologists in the diagnosis of skin cancer.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2024 ","pages":"3022192"},"PeriodicalIF":7.6,"publicationDate":"2024-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10858797/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139724433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-12-05eCollection Date: 2023-01-01DOI: 10.1155/2023/3819587
Maria K Jaakkola, Maria Rantala, Anna Jalo, Teemu Saari, Jaakko Hentilä, Jatta S Helin, Tuuli A Nissinen, Olli Eskola, Johan Rajander, Kirsi A Virtanen, Jarna C Hannukainen, Francisco López-Picón, Riku Klén
<p><p>Clustering the time-activity curves of PET images has been used to separate clinically relevant areas of the brain or tumours. However, PET image segmentation at the multiorgan level is much less studied because the available total-body data have been limited to animal studies. New PET scanners that can acquire total-body PET scans from humans are now becoming more common, which opens plenty of clinically interesting opportunities. Organ-level segmentation of PET images therefore has important applications, yet it lacks sufficient research. In this proof-of-concept study, we evaluate whether previously used segmentation approaches are suitable for segmenting dynamic human total-body PET images at the organ level. Our focus is on general-purpose unsupervised methods that are independent of external data and can be used for all tracers, organisms, and health conditions. Additional anatomical image modalities, such as CT or MRI, are not used; the segmentation is done purely on the basis of the dynamic PET images. The tested methods are commonly used building blocks of more sophisticated methods rather than final methods as such, and our goal is to evaluate whether these basic tools are suited to the emerging task of human total-body PET image segmentation. First, we excluded methods that were computationally too demanding for the large datasets produced by human total-body PET scanners. These criteria filtered out most of the commonly used approaches, leaving only two clustering methods, <i>k</i>-means and the Gaussian mixture model (GMM), for further analysis. We combined <i>k</i>-means with two different preprocessing approaches, namely, principal component analysis (PCA) and independent component analysis (ICA). Then, we selected a suitable number of clusters using 10 images.
Finally, we tested how well the usable approaches segment the remaining PET images at the organ level, highlight the best approaches together with their limitations, and discuss how further research could tackle the observed shortcomings. In this study, we utilised 40 total-body [<sup>18</sup>F]fluorodeoxyglucose PET images of rats to mimic the forthcoming large human PET images, together with a few actual human total-body images to ensure that our conclusions from the rat data generalise to the human data. Our results show that ICA combined with <i>k</i>-means performs worse than the other two computationally usable approaches and that certain organs are easier to segment than others. While GMM performed sufficiently well, it was by far the slowest of the tested approaches, making <i>k</i>-means combined with PCA the most promising candidate for further development. However, even with the best methods, the mean Jaccard index was slightly below 0.5 for the easiest tested organ and below 0.2 for the most challenging organ. Thus, we conclude that accurate and computationally light general-purpose segmentation methods for dynamic total-body PET images are still lacking.</p>
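The most promising pipeline identified above, PCA followed by <i>k</i>-means on voxel time-activity curves, together with the Jaccard index used for evaluation, can be sketched with plain NumPy. This is a minimal illustration under stated assumptions (SVD-based PCA, Lloyd's k-means with farthest-point initialisation), not the authors' implementation.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project rows of X (voxel time-activity curves) onto the top
    principal components via SVD of the centred data."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def kmeans(X, k, n_iter=100, seed=0):
    """Lloyd's k-means with farthest-point initialisation; returns a
    cluster label per row of X."""
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]
    for _ in range(1, k):                 # spread the initial centers out
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[int(d.argmax())])
    centers = np.array(centers)
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):     # converged
            break
        centers = new
    return labels

def jaccard(mask_a, mask_b):
    """Jaccard index (intersection over union) of two boolean masks."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0
```

On synthetic data with two well-separated kinds of time-activity curves (e.g. high-uptake versus low-uptake voxels), this pipeline recovers the two groups exactly; real organ boundaries are far harder, which is what the sub-0.5 Jaccard scores above reflect.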
{"title":"Segmentation of Dynamic Total-Body [<sup>18</sup>F]-FDG PET Images Using Unsupervised Clustering.","authors":"Maria K Jaakkola, Maria Rantala, Anna Jalo, Teemu Saari, Jaakko Hentilä, Jatta S Helin, Tuuli A Nissinen, Olli Eskola, Johan Rajander, Kirsi A Virtanen, Jarna C Hannukainen, Francisco López-Picón, Riku Klén","doi":"10.1155/2023/3819587","DOIUrl":"10.1155/2023/3819587","url":null,"abstract":"<p><p>Clustering the time-activity curves of PET images has been used to separate clinically relevant areas of the brain or tumours. However, PET image segmentation at the multiorgan level is much less studied because the available total-body data have been limited to animal studies. New PET scanners that can acquire total-body PET scans from humans are now becoming more common, which opens plenty of clinically interesting opportunities. Organ-level segmentation of PET images therefore has important applications, yet it lacks sufficient research. In this proof-of-concept study, we evaluate whether previously used segmentation approaches are suitable for segmenting dynamic human total-body PET images at the organ level. Our focus is on general-purpose unsupervised methods that are independent of external data and can be used for all tracers, organisms, and health conditions. Additional anatomical image modalities, such as CT or MRI, are not used; the segmentation is done purely on the basis of the dynamic PET images. The tested methods are commonly used building blocks of more sophisticated methods rather than final methods as such, and our goal is to evaluate whether these basic tools are suited to the emerging task of human total-body PET image segmentation. First, we excluded methods that were computationally too demanding for the large datasets produced by human total-body PET scanners. These criteria filtered out most of the commonly used approaches, leaving only two clustering methods, <i>k</i>-means and the Gaussian mixture model (GMM), for further analysis. We combined <i>k</i>-means with two different preprocessing approaches, namely, principal component analysis (PCA) and independent component analysis (ICA). Then, we selected a suitable number of clusters using 10 images. Finally, we tested how well the usable approaches segment the remaining PET images at the organ level, highlight the best approaches together with their limitations, and discuss how further research could tackle the observed shortcomings. In this study, we utilised 40 total-body [<sup>18</sup>F]fluorodeoxyglucose PET images of rats to mimic the forthcoming large human PET images, together with a few actual human total-body images to ensure that our conclusions from the rat data generalise to the human data. Our results show that ICA combined with <i>k</i>-means performs worse than the other two computationally usable approaches and that certain organs are easier to segment than others. While GMM performed sufficiently well, it was by far the slowest of the tested approaches, making <i>k</i>-means combined with PCA the most promising candidate for further development. However, even with the best methods, the mean Jaccard index was slightly below 0.5 for the easiest tested organ and below 0.2 for the most challenging organ. Thus, we conclude that accurate and computationally light general-purpose segmentation methods for dynamic total-body PET images are still lacking.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2023 ","pages":"3819587"},"PeriodicalIF":3.3,"publicationDate":"2023-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10715853/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138804116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}