Purpose: This study is aimed at evaluating the efficacy of the gradient-spin echo- (GraSE-) based short tau inversion recovery (STIR) sequence (GraSE-STIR) in cardiovascular magnetic resonance (CMR) imaging compared to the conventional turbo spin echo- (TSE-) based STIR sequence, specifically focusing on image quality, specific absorption rate (SAR), and image acquisition time.
Methods: In a prospective study, we examined forty-four normal volunteers and seventeen patients referred for CMR imaging using conventional STIR and GraSE-STIR techniques. Signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), image quality, T2 signal intensity (SI) ratio, SAR, and image acquisition time were compared between both sequences.
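The abstract does not spell out how SNR and CNR were computed; the sketch below shows one common ROI-based definition of the two metrics (mean signal over background noise, and signal difference over background noise). All region names and intensity values are invented for illustration and are not taken from the study.

```python
import numpy as np

def roi_snr_cnr(tissue_roi, reference_roi, background_roi):
    """Common ROI-based estimates of SNR and CNR (illustrative definitions;
    the paper's exact formulas may differ)."""
    snr = np.mean(tissue_roi) / np.std(background_roi)
    cnr = (np.mean(tissue_roi) - np.mean(reference_roi)) / np.std(background_roi)
    return snr, cnr

# Synthetic pixel intensities standing in for drawn regions of interest
rng = np.random.default_rng(0)
myocardium = rng.normal(300, 20, 500)   # myocardial ROI pixels
blood_pool = rng.normal(150, 20, 500)   # blood-pool ROI pixels
background = rng.normal(0, 10, 500)     # background (air) ROI pixels
print(roi_snr_cnr(myocardium, blood_pool, background))
```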
Results: GraSE-STIR showed significant improvements in image quality (4.15 ± 0.8 vs. 3.34 ± 0.9, p = 0.024) and cardiac motion artifact reduction (7 vs. 18 out of 53, p = 0.038) compared to conventional STIR. Furthermore, the acquisition time (27.17 ± 3.53 vs. 36.9 ± 4.08 seconds, p = 0.041) and the local torso SAR (<13% vs. <17%, p = 0.047) were significantly lower for GraSE-STIR than for conventional STIR in the short-axis plane. However, no significant differences were observed in T2 SI ratio (p = 0.141), SNR (p = 0.093), CNR (p = 0.068), and SAR (p = 0.071) between the two sequences.
Conclusions: GraSE-STIR offers notable advantages over the conventional STIR sequence, with improved image quality, reduced motion artifacts, and shorter acquisition times. These findings highlight the potential of GraSE-STIR as a valuable technique for routine clinical CMR imaging.
{"title":"Enhanced Myocardial Tissue Visualization: A Comparative Cardiovascular Magnetic Resonance Study of Gradient-Spin Echo-STIR and Conventional STIR Imaging.","authors":"Sadegh Dehghani, Shapoor Shirani, Elahe Jazayeri Gharebagh","doi":"10.1155/2024/8456669","DOIUrl":"https://doi.org/10.1155/2024/8456669","url":null,"abstract":"<p><strong>Purpose: </strong>This study is aimed at evaluating the efficacy of the gradient-spin echo- (GraSE-) based short tau inversion recovery (STIR) sequence (GraSE-STIR) in cardiovascular magnetic resonance (CMR) imaging compared to the conventional turbo spin echo- (TSE-) based STIR sequence, specifically focusing on image quality, specific absorption rate (SAR), and image acquisition time.</p><p><strong>Methods: </strong>In a prospective study, we examined forty-four normal volunteers and seventeen patients referred for CMR imaging using conventional STIR and GraSE-STIR techniques. Signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), image quality, <i>T</i><sub>2</sub> signal intensity (SI) ratio, SAR, and image acquisition time were compared between both sequences.</p><p><strong>Results: </strong>GraSE-STIR showed significant improvements in image quality (4.15 ± 0.8 vs. 3.34 ± 0.9, <i>p</i> = 0.024) and cardiac motion artifact reduction (7 vs. 18 out of 53, <i>p</i> = 0.038) compared to conventional STIR. Furthermore, the acquisition time (27.17 ± 3.53 vs. 36.9 ± 4.08 seconds, <i>p</i> = 0.041) and the local torso SAR (<13% vs. <17%, <i>p</i> = 0.047) were significantly lower for GraSE-STIR compared to conventional STIR in short-axis plan. However, no significant differences were shown in <i>T</i><sub>2</sub> SI ratio (<i>p</i> = 0.141), SNR (<i>p</i> = 0.093), CNR (<i>p</i> = 0.068), and SAR (<i>p</i> = 0.071) between these two sequences.</p><p><strong>Conclusions: </strong>GraSE-STIR offers notable advantages over conventional STIR sequence, with improved image quality, reduced motion artifacts, and shorter acquisition times. These findings highlight the potential of GraSE-STIR as a valuable technique for routine clinical CMR imaging.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2024 ","pages":"8456669"},"PeriodicalIF":7.6,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11001468/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140871905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-03-19; eCollection Date: 2024-01-01; DOI: 10.1155/2024/2741986
Yao Zheng, Jingliang Zhang, Dong Huang, Xiaoshuo Hao, Weijun Qin, Yang Liu
Background: MRI is an important tool for accurate detection and targeted biopsy of prostate lesions. However, some prostate cancers appear similar to the surrounding normal tissue on MRI; these are referred to as MRI-invisible prostate cancers (MIPCas). The detection of MIPCas remains challenging and requires extensive systematic biopsy for identification. In this study, we developed a weakly supervised UNet (WSUNet) to detect MIPCas.
Methods: The study included 777 patients (training set: 600; testing set: 177), all of whom underwent comprehensive prostate biopsies using an MRI-ultrasound fusion system. MIPCas were identified on MRI based on a Gleason grade of ≥7 from the known systematic biopsy results.
Results: The WSUNet model underwent validation through systematic biopsy in the testing set with an AUC of 0.764 (95% CI: 0.728-0.798). Furthermore, WSUNet exhibited a statistically significant precision improvement of 91.3% (p < 0.01) over conventional systematic biopsy methods in the testing set. This improvement resulted in a substantial 47.6% (p < 0.01) decrease in unnecessary biopsy needles, while maintaining the same number of positively identified cores as in the original systematic biopsy.
Conclusions: The proposed WSUNet could effectively detect MIPCas, thereby reducing unnecessary biopsies.
{"title":"Detecting MRI-Invisible Prostate Cancers Using a Weakly Supervised Deep Learning Model.","authors":"Yao Zheng, Jingliang Zhang, Dong Huang, Xiaoshuo Hao, Weijun Qin, Yang Liu","doi":"10.1155/2024/2741986","DOIUrl":"10.1155/2024/2741986","url":null,"abstract":"<p><strong>Background: </strong>MRI is an important tool for accurate detection and targeted biopsy of prostate lesions. However, the imaging appearances of some prostate cancers are similar to those of the surrounding normal tissue on MRI, which are referred to as MRI-invisible prostate cancers (MIPCas). The detection of MIPCas remains challenging and requires extensive systematic biopsy for identification. In this study, we developed a weakly supervised UNet (WSUNet) to detect MIPCas.</p><p><strong>Methods: </strong>The study included 777 patients (training set: 600; testing set: 177), all of them underwent comprehensive prostate biopsies using an MRI-ultrasound fusion system. MIPCas were identified in MRI based on the Gleason grade (≥7) from known systematic biopsy results.</p><p><strong>Results: </strong>The WSUNet model underwent validation through systematic biopsy in the testing set with an AUC of 0.764 (95% CI: 0.728-0.798). Furthermore, WSUNet exhibited a statistically significant precision improvement of 91.3% (<i>p</i> < 0.01) over conventional systematic biopsy methods in the testing set. This improvement resulted in a substantial 47.6% (<i>p</i> < 0.01) decrease in unnecessary biopsy needles, while maintaining the same number of positively identified cores as in the original systematic biopsy.</p><p><strong>Conclusions: </strong>In conclusion, the proposed WSUNet could effectively detect MIPCas, thereby reducing unnecessary biopsies.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2024 ","pages":"2741986"},"PeriodicalIF":7.6,"publicationDate":"2024-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10965281/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140294947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-03-08; eCollection Date: 2024-01-01; DOI: 10.1155/2024/7001343
Mohammad A Rawashdeh, Sara Almazrouei, Maha Zaitoun, Praveen Kumar, Charbel Saade
Background: Artificial intelligence (AI) applications are rapidly advancing in the field of medical imaging. This study is aimed at investigating radiographers' perception and knowledge of artificial intelligence.
Methods: An online survey was administered using Google Forms and consisted of 20 questions on radiographers' perception of AI. The questionnaire was divided into two parts. The first part collected demographic information and asked whether the participants think AI should be part of medical training, about their previous knowledge of the technologies used in AI, and whether they would prefer to receive training on AI. The second part of the questionnaire consisted of two fields, the first of which comprised 16 questions on radiographers' perception of AI applications in radiology. Descriptive analysis and logistic regression analysis were used to evaluate the effect of gender on the items of the questionnaire.
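For readers unfamiliar with the crude odds ratios (COR) reported below, the sketch shows how a binary logistic regression yields an odds ratio as the exponentiated coefficient. The variable names and survey responses are hypothetical and do not reproduce the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical survey responses: 1 = familiar with AI, 0 = not; male coded 1/0.
df = pd.DataFrame({
    "familiar_with_ai": [1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1],
    "male":             [1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1],
})

X = sm.add_constant(df[["male"]])                       # intercept + predictor
model = sm.Logit(df["familiar_with_ai"], X).fit(disp=False)

# A crude odds ratio is the exponentiated logistic regression coefficient.
print(np.exp(model.params["male"]))
```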
Results: Familiarity with AI was low, with only 52 out of 100 respondents (52%) reporting good familiarity with AI. Many participants considered AI useful in the medical field (74%). The findings demonstrate that nearly all participants (98%) believed that AI should be integrated into university education, and 87% of the respondents preferred to receive training on AI, with some already having prior knowledge of the technologies used in AI. The logistic regression analysis indicated a significant association of male gender and of experience in the range of 23-27 years with the degree of familiarity with AI technology, with crude odds ratios (COR) of 1.89 and 1.87, respectively.
Conclusions: This study suggests that radiographers hold a favorable attitude towards AI in the radiology field. Most participants surveyed believed that AI should be part of radiography education. AI training programs for undergraduate and postgraduate radiographers may be necessary to prepare them for the use of AI tools in radiology.
{"title":"Empowering Radiographers: A Call for Integrated AI Training in University Curricula.","authors":"Mohammad A Rawashdeh, Sara Almazrouei, Maha Zaitoun, Praveen Kumar, Charbel Saade","doi":"10.1155/2024/7001343","DOIUrl":"10.1155/2024/7001343","url":null,"abstract":"<p><strong>Background: </strong>Artificial intelligence (AI) applications are rapidly advancing in the field of medical imaging. This study is aimed at investigating the perception and knowledge of radiographers towards artificial intelligence.</p><p><strong>Methods: </strong>An online survey employing Google Forms consisting of 20 questions regarding the radiographers' perception of AI. The questionnaire was divided into two parts. The first part consisted of demographic information as well as whether the participants think AI should be part of medical training, their previous knowledge of the technologies used in AI, and whether they prefer to receive training on AI. The second part of the questionnaire consisted of two fields. The first one consisted of 16 questions regarding radiographers' perception of AI applications in radiology. Descriptive analysis and logistic regression analysis were used to evaluate the effect of gender on the items of the questionnaire.</p><p><strong>Results: </strong>Familiarity with AI was low, with only 52 out of 100 respondents (52%) reporting good familiarity with AI. Many participants considered AI useful in the medical field (74%). The findings of the study demonstrate that nearly most of the participants (98%) believed that AI should be integrated into university education, with 87% of the respondents preferring to receive training on AI, with some already having prior knowledge of AI used in technologies. The logistic regression analysis indicated a significant association between male gender and experience within the range of 23-27 years with the degree of familiarity with AI technology, exhibiting respective odds ratios of 1.89 (COR = 1.89) and 1.87 (COR = 1.87).</p><p><strong>Conclusions: </strong>This study suggests that medical practices have a favorable attitude towards AI in the radiology field. Most participants surveyed believed that AI should be part of radiography education. AI training programs for undergraduate and postgraduate radiographers may be necessary to prepare them for AI tools in radiology development.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2024 ","pages":"7001343"},"PeriodicalIF":7.6,"publicationDate":"2024-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10942819/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140144318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-02-28; eCollection Date: 2024-01-01; DOI: 10.1155/2024/8862387
Kwangsung Oh, Piero R Bianco
Superresolution structured illumination microscopy (SIM) is an ideal modality for imaging live cells due to its relatively high speed and low photon-induced damage to the cells. The rate-limiting step in observing a superresolution image in SIM is often the reconstruction speed of the algorithm used to form a single image from as many as nine raw images. Reconstruction algorithms impose a significant computing burden due to an intricate workflow and a large number of often complex calculations to produce the final image. Further adding to the computing burden is that the code, even within the MATLAB environment, can be inefficiently written by microscopists who are not computer science researchers. In addition, such code often does not take into consideration the processing power of the graphics processing unit (GPU) of the computer. To address these issues, we present simple but efficient approaches to first revise MATLAB code, followed by conversion to GPU-optimized code. When combined with cost-effective, high-performance GPU-enabled computers, a 4- to 500-fold improvement in algorithm execution speed is observed, as shown for the image-denoising Hessian-SIM algorithm. Importantly, the improved algorithm produces images identical in quality to the original.
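The study's code revisions were done in MATLAB; the sketch below is a conceptual Python/CuPy analogue of the same principle: move the raw image stack to the GPU once, keep the heavy vectorized work there, and transfer the result back once. The filtering step is a toy placeholder, not the Hessian-SIM reconstruction.

```python
import numpy as np

try:
    import cupy as xp          # GPU path (requires a CUDA-capable GPU and CuPy)
    on_gpu = True
except ImportError:
    xp = np                    # transparent CPU fallback for illustration
    on_gpu = False

def filter_stack(raw_stack):
    """Toy per-slice frequency-domain filtering standing in for a reconstruction kernel."""
    stack = xp.asarray(raw_stack)                        # single host-to-device transfer
    spectra = xp.fft.fft2(stack, axes=(-2, -1))          # batched, vectorized GPU work
    spectra *= xp.exp(-0.01 * xp.arange(stack.shape[-1]))  # crude low-pass weighting
    filtered = xp.real(xp.fft.ifft2(spectra, axes=(-2, -1)))
    return xp.asnumpy(filtered) if on_gpu else filtered  # single device-to-host transfer

frames = np.random.rand(9, 512, 512).astype(np.float32)  # nine raw SIM frames
result = filter_stack(frames)
```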
{"title":"Facile Conversion and Optimization of Structured Illumination Image Reconstruction Code into the GPU Environment.","authors":"Kwangsung Oh, Piero R Bianco","doi":"10.1155/2024/8862387","DOIUrl":"10.1155/2024/8862387","url":null,"abstract":"<p><p>Superresolution, structured illumination microscopy (SIM) is an ideal modality for imaging live cells due to its relatively high speed and low photon-induced damage to the cells. The rate-limiting step in observing a superresolution image in SIM is often the reconstruction speed of the algorithm used to form a single image from as many as nine raw images. Reconstruction algorithms impose a significant computing burden due to an intricate workflow and a large number of often complex calculations to produce the final image. Further adding to the computing burden is that the code, even within the MATLAB environment, can be inefficiently written by microscopists who are noncomputer science researchers. In addition, they do not take into consideration the processing power of the graphics processing unit (GPU) of the computer. To address these issues, we present simple but efficient approaches to first revise MATLAB code, followed by conversion to GPU-optimized code. When combined with cost-effective, high-performance GPU-enabled computers, a 4- to 500-fold improvement in algorithm execution speed is observed as shown for the image denoising Hessian-SIM algorithm. Importantly, the improved algorithm produces images identical in quality to the original.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2024 ","pages":"8862387"},"PeriodicalIF":7.6,"publicationDate":"2024-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10917484/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140050652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-02-05; eCollection Date: 2024-01-01; DOI: 10.1155/2024/4102461
Qian Zheng, Kefu Guo, Yinghui Meng, Jiaofen Nan, Lin Xu
Background: The deterministic fiber tracking method has the advantages of high computational efficiency and good repeatability, making it suitable for the noninvasive estimation of brain structural connectivity in clinical fields. To address the tendency of current classical deterministic methods to deviate from the tracking direction in regions of crossing fibers, in this paper we propose an adaptive correction-based deterministic white matter fiber tracking method, named FTACTD.
Methods: The proposed FTACTD method can accurately track white matter fibers by adaptively adjusting the deflection direction strategy based on the tensor matrix and the input fiber direction of adjacent voxels. The degree of directional correction changes adaptively according to the shape of the diffusion tensor, mimicking the actual tracking deflection angle and direction. Furthermore, both forward and reverse tracking techniques are employed to track the entire fiber. The effectiveness of the proposed method is validated and quantified using both simulated and real brain datasets. Various indicators such as invalid bundles (IB), valid bundles (VB), invalid connections (IC), no connections (NC), and valid connections (VC) are utilized to assess the performance of the proposed method on simulated data and real diffusion-weighted imaging (DWI) data.
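The exact FTACTD update rule is not given in the abstract; the sketch below illustrates a generic tensor-deflection tracking step in which the incoming fiber direction is blended with the principal eigenvector of the local diffusion tensor, weighted by how anisotropic the tensor is. It is an assumption-laden illustration of the idea of adaptive directional correction, not the published algorithm.

```python
import numpy as np

def next_direction(tensor, incoming_dir, blend=0.5):
    """One generic tensor-deflection step (illustrative only, not the FTACTD rule)."""
    evals, evecs = np.linalg.eigh(tensor)          # eigenvalues in ascending order
    principal = evecs[:, -1]                       # main diffusion direction
    if np.dot(principal, incoming_dir) < 0:        # keep a consistent orientation
        principal = -principal
    # Anisotropy-like weight: trust the tensor more where it is strongly elongated
    aniso = (evals[-1] - evals.mean()) / (evals.sum() + 1e-12)
    w = blend * aniso
    new_dir = w * principal + (1.0 - w) * incoming_dir
    return new_dir / np.linalg.norm(new_dir)

D = np.diag([1.7e-3, 0.3e-3, 0.2e-3])              # synthetic prolate tensor (mm^2/s)
seed_dir = np.array([0.9, 0.4, 0.1])
print(next_direction(D, seed_dir / np.linalg.norm(seed_dir)))
```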
Results: The experimental results on the simulated data show that the FTACTD method outperforms existing methods, achieving the highest number of VB with a total of 13 bundles. Additionally, it identifies the fewest incorrect fiber bundles, with only 32 bundles identified as wrong. Compared to the FACT method, the FTACTD method reduces the number of NC by 36.38%. In terms of VC, the FTACTD method surpasses even the best-performing deterministic method, SD_Stream, by 1.64%. Extensive in vivo experiments demonstrate the superiority of the proposed method in tracking more accurate and complete fiber paths, resulting in improved continuity.
Conclusion: The FTACTD method proposed in this study demonstrates superior tracking results and provides a methodological basis for the investigation, diagnosis, and treatment of brain disorders associated with white matter fiber deficits and abnormalities.
{"title":"White Matter Fiber Tracking Method with Adaptive Correction of Tracking Direction.","authors":"Qian Zheng, Kefu Guo, Yinghui Meng, Jiaofen Nan, Lin Xu","doi":"10.1155/2024/4102461","DOIUrl":"10.1155/2024/4102461","url":null,"abstract":"<p><strong>Background: </strong>The deterministic fiber tracking method has the advantage of high computational efficiency and good repeatability, making it suitable for the noninvasive estimation of brain structural connectivity in clinical fields. To address the issue of the current classical deterministic method tending to deviate in the tracking direction in the region of crossing fiber region, in this paper, we propose an adaptive correction-based deterministic white matter fiber tracking method, named FTACTD.</p><p><strong>Methods: </strong>The proposed FTACTD method can accurately track white matter fibers by adaptively adjusting the deflection direction strategy based on the tensor matrix and the input fiber direction of adjacent voxels. The degree of correction direction changes adaptively according to the shape of the diffusion tensor, mimicking the actual tracking deflection angle and direction. Furthermore, both forward and reverse tracking techniques are employed to track the entire fiber. The effectiveness of the proposed method is validated and quantified using both simulated and real brain datasets. Various indicators such as invalid bundles (IB), valid bundles (VB), invalid connections (IC), no connections (NC), and valid connections (VC) are utilized to assess the performance of the proposed method on simulated data and real diffusion-weighted imaging (DWI) data.</p><p><strong>Results: </strong>The experimental results of the simulated data show that the FTACTD method tracks outperform existing methods, achieving the highest number of VB with a total of 13 bundles. Additionally, it identifies the least number of incorrect fiber bundles, with only 32 bundles identified as wrong. Compared to the FACT method, the FTACTD method reduces the number of NC by 36.38%. In terms of VC, the FTACTD method surpasses even the best performing SD_Stream method among deterministic methods by 1.64%. Extensive in vivo experiments demonstrate the superiority of the proposed method in terms of tracking more accurate and complete fiber paths, resulting in improved continuity.</p><p><strong>Conclusion: </strong>The FTACTD method proposed in this study indicates superior tracking results and provides a methodological basis for the investigating, diagnosis, and treatment of brain disorders associated with white matter fiber deficits and abnormalities.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2024 ","pages":"4102461"},"PeriodicalIF":7.6,"publicationDate":"2024-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10861278/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139724434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-02-03; eCollection Date: 2024-01-01; DOI: 10.1155/2024/3022192
Galib Muhammad Shahriar Himel, Md Masudul Islam, Kh Abdullah Al-Aff, Shams Ibne Karim, Md Kabir Uddin Sikder
Skin cancer is a significant health concern worldwide, and early and accurate diagnosis plays a crucial role in improving patient outcomes. In recent years, deep learning models have shown remarkable success in various computer vision tasks, including image classification. In this research study, we introduce an approach for skin cancer classification using the vision transformer, a state-of-the-art deep learning architecture that has demonstrated exceptional performance in diverse image analysis tasks. The study utilizes the HAM10000 dataset, a publicly available dataset comprising 10,015 skin lesion images classified into two categories: benign (6705 images) and malignant (3310 images). This dataset consists of high-resolution images captured using dermatoscopes and carefully annotated by expert dermatologists. Preprocessing techniques, such as normalization and augmentation, are applied to enhance the robustness and generalization of the model. The vision transformer architecture is adapted to the skin cancer classification task. The model leverages the self-attention mechanism to capture intricate spatial dependencies and long-range dependencies within the images, enabling it to effectively learn relevant features for accurate classification. The Segment Anything Model (SAM) is employed to segment the cancerous areas from the images, achieving an IoU of 96.01% and a Dice coefficient of 98.14%; various pretrained models are then used for classification with the vision transformer architecture. Extensive experiments and evaluations are conducted to assess the performance of our approach. The results demonstrate the superiority of the vision transformer model over traditional deep learning architectures in skin cancer classification in general, with some exceptions. Upon experimenting with six different models (ViT-Google, ViT-MAE, ViT-ResNet50, ViT-VAN, ViT-BEiT, and ViT-DiT), we found that the approach achieves 96.15% accuracy using Google's ViT patch-32 model with a low false negative ratio on the test dataset, showcasing its potential as an effective tool for aiding dermatologists in the diagnosis of skin cancer.
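As a rough illustration of fine-tuning a pretrained patch-32 vision transformer for binary benign/malignant classification, the sketch below uses torchvision's vit_b_32 as a stand-in; the paper's specific pretrained models, preprocessing, and training schedule are not reproduced here.

```python
import torch
import torch.nn as nn
from torchvision import models
from torchvision.models import ViT_B_32_Weights

# Load an ImageNet-pretrained ViT with 32x32 patches and replace its head
# for binary benign/malignant classification (torchvision stand-in only).
model = models.vit_b_32(weights=ViT_B_32_Weights.IMAGENET1K_V1)
model.heads.head = nn.Linear(model.heads.head.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of 224x224 RGB lesion crops.
images = torch.randn(4, 3, 224, 224)
labels = torch.tensor([0, 1, 1, 0])   # 0 = benign, 1 = malignant
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```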
{"title":"Skin Cancer Segmentation and Classification Using Vision Transformer for Automatic Analysis in Dermatoscopy-Based Noninvasive Digital System.","authors":"Galib Muhammad Shahriar Himel, Md Masudul Islam, Kh Abdullah Al-Aff, Shams Ibne Karim, Md Kabir Uddin Sikder","doi":"10.1155/2024/3022192","DOIUrl":"https://doi.org/10.1155/2024/3022192","url":null,"abstract":"<p><p>Skin cancer is a significant health concern worldwide, and early and accurate diagnosis plays a crucial role in improving patient outcomes. In recent years, deep learning models have shown remarkable success in various computer vision tasks, including image classification. In this research study, we introduce an approach for skin cancer classification using vision transformer, a state-of-the-art deep learning architecture that has demonstrated exceptional performance in diverse image analysis tasks. The study utilizes the HAM10000 dataset; a publicly available dataset comprising 10,015 skin lesion images classified into two categories: benign (6705 images) and malignant (3310 images). This dataset consists of high-resolution images captured using dermatoscopes and carefully annotated by expert dermatologists. Preprocessing techniques, such as normalization and augmentation, are applied to enhance the robustness and generalization of the model. The vision transformer architecture is adapted to the skin cancer classification task. The model leverages the self-attention mechanism to capture intricate spatial dependencies and long-range dependencies within the images, enabling it to effectively learn relevant features for accurate classification. Segment Anything Model (SAM) is employed to segment the cancerous areas from the images; achieving an IOU of 96.01% and Dice coefficient of 98.14% and then various pretrained models are used for classification using vision transformer architecture. Extensive experiments and evaluations are conducted to assess the performance of our approach. The results demonstrate the superiority of the vision transformer model over traditional deep learning architectures in skin cancer classification in general with some exceptions. Upon experimenting on six different models, ViT-Google, ViT-MAE, ViT-ResNet50, ViT-VAN, ViT-BEiT, and ViT-DiT, we found out that the ML approach achieves 96.15% accuracy using Google's ViT patch-32 model with a low false negative ratio on the test dataset, showcasing its potential as an effective tool for aiding dermatologists in the diagnosis of skin cancer.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2024 ","pages":"3022192"},"PeriodicalIF":7.6,"publicationDate":"2024-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10858797/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139724433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-05; eCollection Date: 2023-01-01; DOI: 10.1155/2023/3819587
Maria K Jaakkola, Maria Rantala, Anna Jalo, Teemu Saari, Jaakko Hentilä, Jatta S Helin, Tuuli A Nissinen, Olli Eskola, Johan Rajander, Kirsi A Virtanen, Jarna C Hannukainen, Francisco López-Picón, Riku Klén
Clustering of time-activity curves in PET images has been used to separate clinically relevant areas of the brain or tumours. However, PET image segmentation at the multiorgan level is much less studied because the available total-body data have been limited to animal studies. New PET scanners that provide the opportunity to acquire total-body PET scans from humans as well are now becoming more common, which opens plenty of new clinically interesting opportunities. Therefore, organ-level segmentation of PET images has important applications, yet it lacks sufficient research. In this proof of concept study, we evaluate whether previously used segmentation approaches are suitable for segmenting dynamic human total-body PET images at the organ level. Our focus is on general-purpose unsupervised methods that are independent of external data and can be used for all tracers, organisms, and health conditions. Additional anatomical image modalities, such as CT or MRI, are not used; the segmentation is done purely based on the dynamic PET images. The tested methods are commonly used building blocks of more sophisticated methods rather than final methods as such, and our goal is to evaluate whether these basic tools are suited for the arising human total-body PET image segmentation. First, we excluded methods that were computationally too demanding for the large datasets from human total-body PET scanners. These criteria filtered out most of the commonly used approaches, leaving only two clustering methods, k-means and the Gaussian mixture model (GMM), for further analyses. We combined k-means with two different preprocessing approaches, namely, principal component analysis (PCA) and independent component analysis (ICA). Then, we selected a suitable number of clusters using 10 images. Finally, we tested how well the usable approaches segment the remaining PET images at the organ level, highlight the best approaches together with their limitations, and discuss how further research could tackle the observed shortcomings. In this study, we utilised 40 total-body [18F]fluorodeoxyglucose PET images of rats to mimic the coming large human PET images and a few actual human total-body images to ensure that our conclusions from the rat data generalise to the human data. Our results show that ICA combined with k-means has weaker performance than the other two computationally usable approaches and that certain organs are easier to segment than others. While GMM performed sufficiently well, it was by far the slowest among the tested approaches, making k-means combined with PCA the most promising candidate for further development. However, even with the best methods, the mean Jaccard index was slightly below 0.5 for the easiest tested organ and below 0.2 for the most challenging organ. Thus, we conclude that there is a lack of an accurate and computationally light general-purpose segmentation method that can analyse dynamic total-body PET images.
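The most promising pipeline identified above, PCA followed by k-means on voxel time-activity curves, can be sketched as follows; the synthetic curves, component count, and cluster count are placeholders rather than the values selected in the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Dynamic PET data flattened to one time-activity curve (TAC) per voxel.
# Synthetic stand-in: 10,000 voxels x 24 time frames of cumulative uptake.
rng = np.random.default_rng(1)
tacs = rng.gamma(shape=2.0, scale=1.0, size=(10_000, 24)).cumsum(axis=1)

# Reduce each TAC to a few principal components, then cluster the voxels.
features = PCA(n_components=5).fit_transform(tacs)
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(features)

# 'labels' assigns each voxel to a cluster; clusters are then matched to organs.
print(np.bincount(labels))
```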
{"title":"Segmentation of Dynamic Total-Body [<sup>18</sup>F]-FDG PET Images Using Unsupervised Clustering.","authors":"Maria K Jaakkola, Maria Rantala, Anna Jalo, Teemu Saari, Jaakko Hentilä, Jatta S Helin, Tuuli A Nissinen, Olli Eskola, Johan Rajander, Kirsi A Virtanen, Jarna C Hannukainen, Francisco López-Picón, Riku Klén","doi":"10.1155/2023/3819587","DOIUrl":"10.1155/2023/3819587","url":null,"abstract":"<p><p>Clustering time activity curves of PET images have been used to separate clinically relevant areas of the brain or tumours. However, PET image segmentation in multiorgan level is much less studied due to the available total-body data being limited to animal studies. Now, the new PET scanners providing the opportunity to acquire total-body PET scans also from humans are becoming more common, which opens plenty of new clinically interesting opportunities. Therefore, organ-level segmentation of PET images has important applications, yet it lacks sufficient research. In this proof of concept study, we evaluate if the previously used segmentation approaches are suitable for segmenting dynamic human total-body PET images in organ level. Our focus is on general-purpose unsupervised methods that are independent of external data and can be used for all tracers, organisms, and health conditions. Additional anatomical image modalities, such as CT or MRI, are not used, but the segmentation is done purely based on the dynamic PET images. The tested methods are commonly used building blocks of the more sophisticated methods rather than final methods as such, and our goal is to evaluate if these basic tools are suited for the arising human total-body PET image segmentation. First, we excluded methods that were computationally too demanding for the large datasets from human total-body PET scanners. These criteria filtered out most of the commonly used approaches, leaving only two clustering methods, <i>k</i>-means and Gaussian mixture model (GMM), for further analyses. We combined <i>k</i>-means with two different preprocessing approaches, namely, principal component analysis (PCA) and independent component analysis (ICA). Then, we selected a suitable number of clusters using 10 images. Finally, we tested how well the usable approaches segment the remaining PET images in organ level, highlight the best approaches together with their limitations, and discuss how further research could tackle the observed shortcomings. In this study, we utilised 40 total-body [<sup>18</sup>F] fluorodeoxyglucose PET images of rats to mimic the coming large human PET images and a few actual human total-body images to ensure that our conclusions from the rat data generalise to the human data. Our results show that ICA combined with <i>k</i>-means has weaker performance than the other two computationally usable approaches and that certain organs are easier to segment than others. While GMM performed sufficiently, it was by far the slowest one among the tested approaches, making <i>k</i>-means combined with PCA the most promising candidate for further development. However, even with the best methods, the mean Jaccard index was slightly below 0.5 for the easiest tested organ and below 0.2 for the most challenging organ. 
Thus, we conclude that there is a lack of accurate and computationally light general-purpose segmentation method that can analyse dynamic total-body PET images.</p","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2023 ","pages":"3819587"},"PeriodicalIF":3.3,"publicationDate":"2023-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10715853/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138804116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Diabetic macular edema (DME) and age-related macular degeneration (AMD) are two common eye diseases. They are often undiagnosed or diagnosed late, which can result in permanent and irreversible vision loss. Therefore, early detection and treatment of these diseases can prevent vision loss, save money, and provide a better quality of life for individuals. Optical coherence tomography (OCT) imaging is widely applied to identify eye diseases, including DME and AMD. In this work, we developed automatic deep learning-based methods to detect these pathologies using SD-OCT scans. The convolutional neural network (CNN) that we developed from scratch gave the best classification score, with an accuracy higher than 99% on the Duke dataset of OCT images.
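The abstract does not describe the network architecture; the sketch below is a minimal from-scratch CNN for classifying OCT scans into, for example, AMD, DME, and normal classes (the class set and all layer sizes are assumptions chosen only to illustrate the kind of model meant).

```python
import torch
import torch.nn as nn

class SmallOctCnn(nn.Module):
    """Minimal CNN sketch for OCT classification; not the authors' architecture."""
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Grayscale SD-OCT B-scans resized to 224x224.
logits = SmallOctCnn()(torch.randn(2, 1, 224, 224))
print(logits.shape)  # torch.Size([2, 3])
```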
{"title":"Automatic Detection of AMD and DME Retinal Pathologies Using Deep Learning.","authors":"Latifa Saidi, Hajer Jomaa, Haddad Zainab, Hsouna Zgolli, Sonia Mabrouk, Désiré Sidibé, Hedi Tabia, Nawres Khlifa","doi":"10.1155/2023/9966107","DOIUrl":"10.1155/2023/9966107","url":null,"abstract":"<p><p>Diabetic macular edema (DME) and age-related macular degeneration (AMD) are two common eye diseases. They are often undiagnosed or diagnosed late. This can result in permanent and irreversible vision loss. Therefore, early detection and treatment of these diseases can prevent vision loss, save money, and provide a better quality of life for individuals. Optical coherence tomography (OCT) imaging is widely applied to identify eye diseases, including DME and AMD. In this work, we developed automatic deep learning-based methods to detect these pathologies using SD-OCT scans. The convolutional neural network (CNN) from scratch we developed gave the best classification score with an accuracy higher than 99% on Duke dataset of OCT images.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2023 ","pages":"9966107"},"PeriodicalIF":7.6,"publicationDate":"2023-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10691890/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138478963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-15; eCollection Date: 2023-01-01; DOI: 10.1155/2023/6304219
Eric Naab Manson, Stephen Inkoom, Abdul Nashirudeen Mumuni, Issahaku Shirazu, Adolf Kofi Awua
Background: The 3D T1W turbo field echo sequence is a standard imaging method for acquiring high-contrast images of the brain. However, the contrast-to-noise ratio (CNR) can be affected by the turbo factor, which could affect the delineation and segmentation of various structures in the brain and may consequently lead to misdiagnosis. This study is aimed at evaluating the effect of the turbo factor on image quality and volumetric measurement reproducibility in brain magnetic resonance imaging (MRI).
Methods: Brain images of five healthy volunteers with no history of neurological diseases were acquired on a 1.5 T MRI scanner with varying turbo factors of 50, 100, 150, 200, and 225. The images were processed and analyzed with FreeSurfer. The influence of the turbo factor on image quality and on the reproducibility of brain volume measurements was investigated. Image quality metrics assessed included the signal-to-noise ratio (SNR) of white matter (WM), the CNR between gray matter and white matter (GM/WM) and between gray matter and cerebrospinal fluid (GM/CSF), and the Euler number (EN). Moreover, structural brain volume measurements of WM, GM, and CSF were conducted.
Results: Turbo factor 200 produced the best SNR (median = 17.01) and GM/WM CNR (median = 2.29), but turbo factor 100 offered the most reproducible SNR (IQR = 2.72) and GM/WM CNR (IQR = 0.14). Turbo factor 50 had the worst and least reproducible SNR, whereas turbo factor 225 had the worst and least reproducible GM/WM CNR. Turbo factor 200 again had the best GM/CSF CNR but offered the least reproducible GM/CSF CNR. Turbo factor 225 had the best performance on EN (-21), while turbo factor 200 was the second most reproducible turbo factor on EN (11). The results showed that turbo factor 200 had the shortest data acquisition time, in addition to superior performance on SNR, GM/WM CNR, and GM/CSF CNR, and good reproducibility characteristics on EN. By one-way ANOVA, neither image quality metrics nor volumetric measurements varied significantly (p > 0.05) across the range of turbo factors used in the study.
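The significance claim above rests on a one-way ANOVA across the five turbo factors; the sketch below shows how such a test is run with SciPy. The SNR readings are invented and merely shaped so that, as in the study, the test returns p > 0.05.

```python
from scipy import stats

# Hypothetical white-matter SNR measurements (five volunteers) per turbo factor.
snr_tf50  = [16.2, 15.8, 16.9, 16.1, 16.5]
snr_tf100 = [16.4, 16.0, 16.8, 16.6, 16.3]
snr_tf150 = [16.7, 16.2, 16.9, 16.4, 16.6]
snr_tf200 = [16.7, 16.5, 16.8, 16.6, 16.4]
snr_tf225 = [16.3, 16.1, 16.7, 16.5, 16.4]

f_stat, p_value = stats.f_oneway(snr_tf50, snr_tf100, snr_tf150, snr_tf200, snr_tf225)
print(f_stat, p_value)  # p > 0.05 here indicates no significant difference across groups
```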
Conclusion: Since no significant differences were observed among the turbo factors in terms of image quality and brain structure volumes, turbo factor 200, with a 74% reduction in acquisition time, was found to be optimal for brain MR imaging at 1.5 T.
{"title":"Assessment of the Impact of Turbo Factor on Image Quality and Tissue Volumetrics in Brain Magnetic Resonance Imaging Using the Three-Dimensional T1-Weighted (3D T1W) Sequence.","authors":"Eric Naab Manson, Stephen Inkoom, Abdul Nashirudeen Mumuni, Issahaku Shirazu, Adolf Kofi Awua","doi":"10.1155/2023/6304219","DOIUrl":"https://doi.org/10.1155/2023/6304219","url":null,"abstract":"<p><strong>Background: </strong>The 3D T1W turbo field echo sequence is a standard imaging method for acquiring high-contrast images of the brain. However, the contrast-to-noise ratio (CNR) can be affected by the turbo factor, which could affect the delineation and segmentation of various structures in the brain and may consequently lead to misdiagnosis. This study is aimed at evaluating the effect of the turbo factor on image quality and volumetric measurement reproducibility in brain magnetic resonance imaging (MRI).</p><p><strong>Methods: </strong>Brain images of five healthy volunteers with no history of neurological diseases were acquired on a 1.5 T MRI scanner with varying turbo factors of 50, 100, 150, 200, and 225. The images were processed and analyzed with FreeSurfer. The influence of the TFE factor on image quality and reproducibility of brain volume measurements was investigated. Image quality metrics assessed included the signal-to-noise ratio (SNR) of white matter (WM), CNR between gray matter/white matter (GM/WM) and gray matter/cerebrospinal fluid (GM/CSF), and Euler number (EN). Moreover, structural brain volume measurements of WM, GM, and CSF were conducted.</p><p><strong>Results: </strong>Turbo factor 200 produced the best SNR (median = 17.01) and GM/WM CNR (median = 2.29), but turbo factor 100 offered the most reproducible SNR (IQR = 2.72) and GM/WM CNR (IQR = 0.14). Turbo factor 50 had the worst and the least reproducible SNR, whereas turbo factor 225 had the worst and the least reproducible GM/WM CNR. Turbo factor 200 again had the best GM/CSF CNR but offered the least reproducible GM/CSF CNR. Turbo factor 225 had the best performance on EN (-21), while turbo factor 200 was next to the most reproducible turbo factor on EN (11). The results showed that turbo factor 200 had the least data acquisition time, in addition to superior performance on SNR, GM/WM CNR, GM/CSF CNR, and good reproducibility characteristics on EN. Both image quality metrics and volumetric measurements did not vary significantly (<i>p</i> > 0.05) with the range of turbo factors used in the study by one-way ANOVA analysis.</p><p><strong>Conclusion: </strong>Since no significant differences were observed in the performance of the turbo factors in terms of image quality and volume of brain structure, turbo factor 200 with a 74% acquisition time reduction was found to be optimal for brain MR imaging at 1.5 T.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2023 ","pages":"6304219"},"PeriodicalIF":7.6,"publicationDate":"2023-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10665095/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138463553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-10-25; eCollection Date: 2023-01-01; DOI: 10.1155/2023/8512461
Christopher Liu, Juanjuan Fan, Barbara Bailey, Ralph-Axel Müller, Annika Linke
Functional connectivity MRI (fcMRI) is a technique used to study the functional connectedness of distinct regions of the brain by measuring the temporal correlation between their blood oxygen level-dependent (BOLD) signals. fcMRI is typically measured with the Pearson correlation (PC), which assumes that there is no lag between time series. Dynamic time warping (DTW) is an alternative measure of similarity between time series that is robust to such time lags. We used PC fcMRI data and DTW fcMRI data as predictors in machine learning models for classifying autism spectrum disorder (ASD). When combined with dimension reduction techniques, such as principal component analysis, functional connectivity estimated with DTW showed greater predictive ability than functional connectivity estimated with PC. Our results suggest that DTW fcMRI can be a suitable alternative measure that may characterize fcMRI in a different, but complementary, way to PC fcMRI and is worth continued investigation. In studying different variants of cross validation (CV), our results suggest that, when it is necessary to tune model hyperparameters and assess model performance at the same time, a K-fold CV nested within leave-one-out CV may be a competitive contender in terms of performance and computational speed, especially when sample size is not large.
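A minimal dynamic-programming implementation of DTW between two lagged BOLD time series is sketched below; the study's exact DTW variant (windowing, normalization, conversion to a connectivity value) may differ.

```python
import numpy as np

def dtw_distance(x, y):
    """Classic dynamic-programming DTW between two 1D time series."""
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Two toy regional BOLD signals where one lags the other by two volumes:
# the Pearson correlation is degraded by the lag, while the DTW cost stays small.
t = np.arange(100)
roi_a = np.sin(2 * np.pi * t / 25)
roi_b = np.sin(2 * np.pi * (t - 2) / 25)
print(dtw_distance(roi_a, roi_b), np.corrcoef(roi_a, roi_b)[0, 1])
```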
{"title":"Assessing Predictive Ability of Dynamic Time Warping Functional Connectivity for ASD Classification.","authors":"Christopher Liu, Juanjuan Fan, Barbara Bailey, Ralph-Axel Müller, Annika Linke","doi":"10.1155/2023/8512461","DOIUrl":"10.1155/2023/8512461","url":null,"abstract":"<p><p>Functional connectivity MRI (fcMRI) is a technique used to study the functional connectedness of distinct regions of the brain by measuring the temporal correlation between their blood oxygen level-dependent (BOLD) signals. fcMRI is typically measured with the Pearson correlation (PC), which assumes that there is no lag between time series. Dynamic time warping (DTW) is an alternative measure of similarity between time series that is robust to such time lags. We used PC fcMRI data and DTW fcMRI data as predictors in machine learning models for classifying autism spectrum disorder (ASD). When combined with dimension reduction techniques, such as principal component analysis, functional connectivity estimated with DTW showed greater predictive ability than functional connectivity estimated with PC. Our results suggest that DTW fcMRI can be a suitable alternative measure that may be characterizing fcMRI in a different, but complementary, way to PC fcMRI that is worth continued investigation. In studying different variants of cross validation (CV), our results suggest that, when it is necessary to tune model hyperparameters and assess model performance at the same time, a <i>K</i>-fold CV nested within leave-one-out CV may be a competitive contender in terms of performance and computational speed, especially when sample size is not large.</p>","PeriodicalId":47063,"journal":{"name":"International Journal of Biomedical Imaging","volume":"2023 ","pages":"8512461"},"PeriodicalIF":7.6,"publicationDate":"2023-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10620025/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71427758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}