
International Journal of Biomedical Imaging: Latest Publications

Enhanced Myocardial Tissue Visualization: A Comparative Cardiovascular Magnetic Resonance Study of Gradient-Spin Echo-STIR and Conventional STIR Imaging.
IF 7.6 Q2 ENGINEERING, BIOMEDICAL Pub Date: 2024-04-01 eCollection Date: 2024-01-01 DOI: 10.1155/2024/8456669
Sadegh Dehghani, Shapoor Shirani, Elahe Jazayeri Gharebagh

Purpose: This study is aimed at evaluating the efficacy of the gradient-spin echo- (GraSE-) based short tau inversion recovery (STIR) sequence (GraSE-STIR) in cardiovascular magnetic resonance (CMR) imaging compared to the conventional turbo spin echo- (TSE-) based STIR sequence, specifically focusing on image quality, specific absorption rate (SAR), and image acquisition time.

Methods: In a prospective study, we examined forty-four normal volunteers and seventeen patients referred for CMR imaging using conventional STIR and GraSE-STIR techniques. Signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), image quality, T2 signal intensity (SI) ratio, SAR, and image acquisition time were compared between both sequences.

Results: GraSE-STIR showed significant improvements in image quality (4.15 ± 0.8 vs. 3.34 ± 0.9, p = 0.024) and cardiac motion artifact reduction (7 vs. 18 out of 53, p = 0.038) compared to conventional STIR. Furthermore, the acquisition time (27.17 ± 3.53 vs. 36.9 ± 4.08 seconds, p = 0.041) and the local torso SAR (<13% vs. <17%, p = 0.047) were significantly lower for GraSE-STIR than for conventional STIR in the short-axis plane. However, no significant differences were found in T2 SI ratio (p = 0.141), SNR (p = 0.093), CNR (p = 0.068), or SAR (p = 0.071) between these two sequences.

Conclusions: GraSE-STIR offers notable advantages over conventional STIR sequence, with improved image quality, reduced motion artifacts, and shorter acquisition times. These findings highlight the potential of GraSE-STIR as a valuable technique for routine clinical CMR imaging.
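As context for the metrics compared above, here is a minimal sketch (not the authors' code) of how SNR and CNR are commonly derived from region-of-interest (ROI) statistics; the synthetic tissue ROIs and the background-noise convention are illustrative assumptions.

```python
import numpy as np

def snr(roi_signal: np.ndarray, roi_noise: np.ndarray) -> float:
    """SNR as mean tissue signal over the standard deviation of background noise."""
    return roi_signal.mean() / roi_noise.std()

def cnr(roi_a: np.ndarray, roi_b: np.ndarray, roi_noise: np.ndarray) -> float:
    """CNR as the absolute mean signal difference between two tissues over noise SD."""
    return abs(roi_a.mean() - roi_b.mean()) / roi_noise.std()

# Toy example with synthetic ROIs standing in for myocardium, blood pool, and air.
rng = np.random.default_rng(0)
myocardium = rng.normal(400, 20, size=500)
blood_pool = rng.normal(700, 25, size=500)
air = rng.normal(0, 10, size=500)
print(f"SNR: {snr(myocardium, air):.1f}, CNR: {cnr(blood_pool, myocardium, air):.1f}")
```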

Citations: 0
Detecting MRI-Invisible Prostate Cancers Using a Weakly Supervised Deep Learning Model.
IF 7.6 Q2 ENGINEERING, BIOMEDICAL Pub Date: 2024-03-19 eCollection Date: 2024-01-01 DOI: 10.1155/2024/2741986
Yao Zheng, Jingliang Zhang, Dong Huang, Xiaoshuo Hao, Weijun Qin, Yang Liu

Background: MRI is an important tool for accurate detection and targeted biopsy of prostate lesions. However, the imaging appearances of some prostate cancers are similar to those of the surrounding normal tissue on MRI; these are referred to as MRI-invisible prostate cancers (MIPCas). The detection of MIPCas remains challenging and requires extensive systematic biopsy for identification. In this study, we developed a weakly supervised UNet (WSUNet) to detect MIPCas.

Methods: The study included 777 patients (training set: 600; testing set: 177), all of whom underwent comprehensive prostate biopsies using an MRI-ultrasound fusion system. MIPCas were identified on MRI based on the Gleason grade (≥7) from known systematic biopsy results.

Results: The WSUNet model underwent validation through systematic biopsy in the testing set with an AUC of 0.764 (95% CI: 0.728-0.798). Furthermore, WSUNet exhibited a statistically significant precision improvement of 91.3% (p < 0.01) over conventional systematic biopsy methods in the testing set. This improvement resulted in a substantial 47.6% (p < 0.01) decrease in unnecessary biopsy needles, while maintaining the same number of positively identified cores as in the original systematic biopsy.

Conclusions: The proposed WSUNet can effectively detect MIPCas, thereby reducing unnecessary biopsies.
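For readers unfamiliar with the reported AUC and its 95% CI, here is a hedged sketch of how such an interval is often estimated with a percentile bootstrap; the synthetic labels and scores are placeholders, and the bootstrap convention is an assumption rather than the authors' stated procedure.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, size=300)                     # stand-in biopsy outcomes
y_score = y_true * 0.4 + rng.normal(0.5, 0.3, size=300)   # stand-in model outputs

auc = roc_auc_score(y_true, y_score)
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(y_true), len(y_true))       # resample with replacement
    if len(np.unique(y_true[idx])) < 2:                   # need both classes present
        continue
    boot.append(roc_auc_score(y_true[idx], y_score[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC = {auc:.3f} (95% CI: {lo:.3f}-{hi:.3f})")
```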

Citations: 0
Empowering Radiographers: A Call for Integrated AI Training in University Curricula.
IF 7.6 Q2 ENGINEERING, BIOMEDICAL Pub Date: 2024-03-08 eCollection Date: 2024-01-01 DOI: 10.1155/2024/7001343
Mohammad A Rawashdeh, Sara Almazrouei, Maha Zaitoun, Praveen Kumar, Charbel Saade

Background: Artificial intelligence (AI) applications are rapidly advancing in the field of medical imaging. This study is aimed at investigating radiographers' perception and knowledge of artificial intelligence.

Methods: An online survey was administered using Google Forms, consisting of 20 questions regarding radiographers' perception of AI. The questionnaire was divided into two parts. The first part collected demographic information, as well as whether the participants think AI should be part of medical training, their previous knowledge of the technologies used in AI, and whether they would prefer to receive training on AI. The second part of the questionnaire consisted of two fields, the first of which comprised 16 questions regarding radiographers' perception of AI applications in radiology. Descriptive analysis and logistic regression analysis were used to evaluate the effect of gender on the items of the questionnaire.

Results: Familiarity with AI was low, with only 52 out of 100 respondents (52%) reporting good familiarity with AI. Many participants considered AI useful in the medical field (74%). The findings demonstrate that nearly all participants (98%) believed that AI should be integrated into university education, and 87% of respondents preferred to receive training on AI, with some already having prior knowledge of the technologies used in AI. The logistic regression analysis indicated a significant association of male gender and experience within the range of 23-27 years with the degree of familiarity with AI technology, with crude odds ratios (COR) of 1.89 and 1.87, respectively.

Conclusions: This study suggests that medical practitioners have a favorable attitude towards AI in the radiology field. Most participants surveyed believed that AI should be part of radiography education. AI training programs for undergraduate and postgraduate radiographers may be necessary to prepare them for AI tools in radiology.
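Here is a minimal sketch of how odds ratios like those reported above are obtained from a logistic regression, namely by exponentiating the fitted coefficients; the predictors and outcome data below are synthetic placeholders, not the survey data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 100
male = rng.integers(0, 2, n)            # 1 = male
exp_23_27 = rng.integers(0, 2, n)       # 1 = 23-27 years of experience
# Simulate a familiarity outcome loosely associated with both predictors.
logit = -0.5 + 0.6 * male + 0.6 * exp_23_27
familiar = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([male, exp_23_27]))
fit = sm.Logit(familiar, X).fit(disp=0)
odds_ratios = np.exp(fit.params[1:])    # skip the intercept
print(dict(zip(["male", "experience 23-27y"], odds_ratios.round(2))))
```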

Citations: 0
Facile Conversion and Optimization of Structured Illumination Image Reconstruction Code into the GPU Environment.
IF 7.6 Q2 ENGINEERING, BIOMEDICAL Pub Date: 2024-02-28 eCollection Date: 2024-01-01 DOI: 10.1155/2024/8862387
Kwangsung Oh, Piero R Bianco

Superresolution structured illumination microscopy (SIM) is an ideal modality for imaging live cells due to its relatively high speed and low photon-induced damage to the cells. The rate-limiting step in observing a superresolution image in SIM is often the reconstruction speed of the algorithm used to form a single image from as many as nine raw images. Reconstruction algorithms impose a significant computing burden due to an intricate workflow and a large number of often complex calculations needed to produce the final image. Further adding to the computing burden, the code, even within the MATLAB environment, can be inefficiently written by microscopists who are not computer scientists. In addition, such code often does not take advantage of the processing power of the computer's graphics processing unit (GPU). To address these issues, we present simple but efficient approaches to first revise MATLAB code and then convert it to GPU-optimized code. When combined with cost-effective, high-performance GPU-enabled computers, a 4- to 500-fold improvement in algorithm execution speed is observed, as shown for the image-denoising Hessian-SIM algorithm. Importantly, the improved algorithm produces images identical in quality to the original.
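The article targets MATLAB; as an analogous illustration in Python, here is a hedged sketch of the same pattern of moving arrays to the GPU while keeping the algorithm unchanged, using CuPy as a drop-in NumPy replacement (an illustrative stand-in, not the authors' code). The toy frequency-domain filter stands in for the reconstruction math, and the sketch falls back to NumPy so it runs without a GPU.

```python
import numpy as np
try:
    import cupy as xp          # GPU arrays with a NumPy-compatible API
    on_gpu = True
except ImportError:
    xp = np                    # CPU fallback so the sketch still runs
    on_gpu = False

def denoise_fft(stack):
    """Toy frequency-domain low-pass filter standing in for SIM reconstruction math."""
    f = xp.fft.fft2(stack)
    f[..., stack.shape[-2] // 4 :, :] = 0   # crude low-pass: zero higher-index rows
    return xp.fft.ifft2(f).real

raw = xp.asarray(np.random.rand(9, 512, 512))   # nine raw SIM frames
result = denoise_fft(raw)
print("ran on GPU:" if on_gpu else "ran on CPU:", result.shape)
```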

Citations: 0
White Matter Fiber Tracking Method with Adaptive Correction of Tracking Direction.
IF 7.6 Q2 ENGINEERING, BIOMEDICAL Pub Date: 2024-02-05 eCollection Date: 2024-01-01 DOI: 10.1155/2024/4102461
Qian Zheng, Kefu Guo, Yinghui Meng, Jiaofen Nan, Lin Xu

Background: The deterministic fiber tracking method has the advantages of high computational efficiency and good repeatability, making it suitable for noninvasive estimation of brain structural connectivity in clinical settings. To address the tendency of current classical deterministic methods to deviate from the tracking direction in regions of crossing fibers, in this paper we propose an adaptive correction-based deterministic white matter fiber tracking method, named FTACTD.

Methods: The proposed FTACTD method accurately tracks white matter fibers by adaptively adjusting the deflection direction strategy based on the tensor matrix and the input fiber direction of adjacent voxels. The degree of direction correction changes adaptively according to the shape of the diffusion tensor, mimicking the actual tracking deflection angle and direction. Furthermore, both forward and reverse tracking techniques are employed to track the entire fiber. The effectiveness of the proposed method is validated and quantified using both simulated and real brain datasets. Various indicators, such as invalid bundles (IB), valid bundles (VB), invalid connections (IC), no connections (NC), and valid connections (VC), are utilized to assess the performance of the proposed method on simulated data and real diffusion-weighted imaging (DWI) data.

Results: The experimental results on the simulated data show that the FTACTD method outperforms existing methods, achieving the highest number of VB with a total of 13 bundles. Additionally, it identifies the fewest incorrect fiber bundles, with only 32 bundles identified as wrong. Compared to the FACT method, the FTACTD method reduces the number of NC by 36.38%. In terms of VC, the FTACTD method surpasses even the best-performing deterministic method, SD_Stream, by 1.64%. Extensive in vivo experiments demonstrate the superiority of the proposed method in tracking more accurate and complete fiber paths, resulting in improved continuity.

Conclusion: The FTACTD method proposed in this study achieves superior tracking results and provides a methodological basis for the investigation, diagnosis, and treatment of brain disorders associated with white matter fiber deficits and abnormalities.
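Here is a hedged sketch of one deterministic tracking step with adaptive direction correction in the spirit described above: the incoming direction is deflected by the local diffusion tensor and blended with the tensor's principal eigenvector. The blending weight and the anisotropy-like measure are illustrative assumptions, not the FTACTD definition.

```python
import numpy as np

def next_direction(D: np.ndarray, v_in: np.ndarray) -> np.ndarray:
    evals, evecs = np.linalg.eigh(D)           # eigenvalues in ascending order
    e1 = evecs[:, -1]                          # principal diffusion direction
    if np.dot(e1, v_in) < 0:                   # keep a consistent orientation
        e1 = -e1
    deflected = D @ v_in                       # tensor-deflection term
    deflected /= np.linalg.norm(deflected)
    # Anisotropy-like weight: trust e1 more in strongly anisotropic voxels.
    fa_like = (evals[-1] - evals.mean()) / evals[-1]
    v_out = fa_like * e1 + (1 - fa_like) * deflected
    return v_out / np.linalg.norm(v_out)

D = np.diag([1.7e-3, 0.3e-3, 0.3e-3])          # prolate tensor aligned with x
v = np.array([0.8, 0.6, 0.0]); v /= np.linalg.norm(v)
print(next_direction(D, v))                    # output bends toward the x axis
```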

Citations: 0
Skin Cancer Segmentation and Classification Using Vision Transformer for Automatic Analysis in Dermatoscopy-Based Noninvasive Digital System.
IF 7.6 Q2 ENGINEERING, BIOMEDICAL Pub Date: 2024-02-03 eCollection Date: 2024-01-01 DOI: 10.1155/2024/3022192
Galib Muhammad Shahriar Himel, Md Masudul Islam, Kh Abdullah Al-Aff, Shams Ibne Karim, Md Kabir Uddin Sikder

Skin cancer is a significant health concern worldwide, and early and accurate diagnosis plays a crucial role in improving patient outcomes. In recent years, deep learning models have shown remarkable success in various computer vision tasks, including image classification. In this research study, we introduce an approach for skin cancer classification using the vision transformer, a state-of-the-art deep learning architecture that has demonstrated exceptional performance in diverse image analysis tasks. The study utilizes the HAM10000 dataset, a publicly available dataset comprising 10,015 skin lesion images classified into two categories: benign (6705 images) and malignant (3310 images). This dataset consists of high-resolution images captured using dermatoscopes and carefully annotated by expert dermatologists. Preprocessing techniques, such as normalization and augmentation, are applied to enhance the robustness and generalization of the model. The vision transformer architecture is adapted to the skin cancer classification task. The model leverages the self-attention mechanism to capture intricate spatial dependencies and long-range dependencies within the images, enabling it to effectively learn relevant features for accurate classification. The Segment Anything Model (SAM) is employed to segment the cancerous areas from the images, achieving an IoU of 96.01% and a Dice coefficient of 98.14%; various pretrained models are then used for classification with the vision transformer architecture. Extensive experiments and evaluations are conducted to assess the performance of our approach. The results demonstrate the general superiority of the vision transformer model over traditional deep learning architectures in skin cancer classification, with some exceptions. Upon experimenting with six different models (ViT-Google, ViT-MAE, ViT-ResNet50, ViT-VAN, ViT-BEiT, and ViT-DiT), we found that the approach achieves 96.15% accuracy using Google's ViT patch-32 model with a low false negative ratio on the test dataset, showcasing its potential as an effective tool for aiding dermatologists in the diagnosis of skin cancer.
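Here is a minimal sketch of the overlap metrics quoted above, IoU and Dice, computed from binary segmentation masks; the toy masks are illustrative assumptions.

```python
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection over union of two boolean masks."""
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient: twice the intersection over the sum of mask sizes."""
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

pred = np.zeros((64, 64), bool); pred[10:40, 10:40] = True   # model mask
gt = np.zeros((64, 64), bool); gt[12:42, 12:42] = True       # expert annotation
print(f"IoU = {iou(pred, gt):.3f}, Dice = {dice(pred, gt):.3f}")
```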

Citations: 0
Segmentation of Dynamic Total-Body [18F]-FDG PET Images Using Unsupervised Clustering.
IF 3.3 Q2 ENGINEERING, BIOMEDICAL Pub Date: 2023-12-05 eCollection Date: 2023-01-01 DOI: 10.1155/2023/3819587
Maria K Jaakkola, Maria Rantala, Anna Jalo, Teemu Saari, Jaakko Hentilä, Jatta S Helin, Tuuli A Nissinen, Olli Eskola, Johan Rajander, Kirsi A Virtanen, Jarna C Hannukainen, Francisco López-Picón, Riku Klén
Clustering time activity curves of PET images have been used to separate clinically relevant areas of the brain or tumours. However, PET image segmentation at the multiorgan level is much less studied due to the available total-body data being limited to animal studies. Now, the new PET scanners providing the opportunity to acquire total-body PET scans also from humans are becoming more common, which opens plenty of new clinically interesting opportunities. Therefore, organ-level segmentation of PET images has important applications, yet it lacks sufficient research. In this proof of concept study, we evaluate if the previously used segmentation approaches are suitable for segmenting dynamic human total-body PET images at the organ level. Our focus is on general-purpose unsupervised methods that are independent of external data and can be used for all tracers, organisms, and health conditions. Additional anatomical image modalities, such as CT or MRI, are not used; the segmentation is done purely based on the dynamic PET images. The tested methods are commonly used building blocks of more sophisticated methods rather than final methods as such, and our goal is to evaluate if these basic tools are suited for the arising human total-body PET image segmentation. First, we excluded methods that were computationally too demanding for the large datasets from human total-body PET scanners. These criteria filtered out most of the commonly used approaches, leaving only two clustering methods, k-means and Gaussian mixture model (GMM), for further analyses. We combined k-means with two different preprocessing approaches, namely, principal component analysis (PCA) and independent component analysis (ICA). Then, we selected a suitable number of clusters using 10 images. Finally, we tested how well the usable approaches segment the remaining PET images at the organ level, highlight the best approaches together with their limitations, and discuss how further research could tackle the observed shortcomings. In this study, we utilised 40 total-body [18F]fluorodeoxyglucose PET images of rats to mimic the coming large human PET images and a few actual human total-body images to ensure that our conclusions from the rat data generalise to the human data. Our results show that ICA combined with k-means has weaker performance than the other two computationally usable approaches and that certain organs are easier to segment than others. While GMM performed sufficiently, it was by far the slowest one among the tested approaches, making k-means combined with PCA the most promising candidate for further development. However, even with the best methods, the mean Jaccard index was slightly below 0.5 for the easiest tested organ and below 0.2 for the most challenging organ. Thus, we conclude that there is a lack of an accurate and computationally light general-purpose segmentation method that can analyse dynamic total-body PET images.
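Here is a hedged sketch of the most promising pipeline identified above: compress each voxel's time-activity curve with PCA, then cluster voxels with k-means. The frame count, component count, and number of clusters are illustrative assumptions, not the study's settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic dynamic PET: 5000 voxels x 30 time frames, three underlying kinetics.
t = np.linspace(0, 1, 30)
bases = np.stack([t, 1 - np.exp(-5 * t), np.exp(-3 * t)])
labels_true = rng.integers(0, 3, 5000)
tacs = bases[labels_true] + rng.normal(0, 0.05, (5000, 30))

feats = PCA(n_components=5).fit_transform(tacs)    # compress the curves
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(feats)
print(np.bincount(labels))                         # voxels per "organ" cluster
```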
Citations: 0
Automatic Detection of AMD and DME Retinal Pathologies Using Deep Learning.
IF 7.6 Q2 ENGINEERING, BIOMEDICAL Pub Date: 2023-11-24 eCollection Date: 2023-01-01 DOI: 10.1155/2023/9966107
Latifa Saidi, Hajer Jomaa, Haddad Zainab, Hsouna Zgolli, Sonia Mabrouk, Désiré Sidibé, Hedi Tabia, Nawres Khlifa

Diabetic macular edema (DME) and age-related macular degeneration (AMD) are two common eye diseases. They are often undiagnosed or diagnosed late, which can result in permanent and irreversible vision loss. Therefore, early detection and treatment of these diseases can prevent vision loss, save money, and provide a better quality of life for individuals. Optical coherence tomography (OCT) imaging is widely applied to identify eye diseases, including DME and AMD. In this work, we developed automatic deep learning-based methods to detect these pathologies using SD-OCT scans. The convolutional neural network (CNN) we developed from scratch gave the best classification score, with an accuracy higher than 99% on the Duke dataset of OCT images.
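Here is a hedged sketch of a small from-scratch CNN classifier of the kind described; the layer sizes, input resolution, and class count are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class OCTNet(nn.Module):
    def __init__(self, n_classes: int = 3):       # e.g., DME, AMD, normal
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),               # global pooling to 32 features
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = OCTNet()
logits = model(torch.randn(4, 1, 224, 224))        # a batch of 4 grayscale scans
print(logits.shape)                                # torch.Size([4, 3])
```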

Citations: 0
Assessment of the Impact of Turbo Factor on Image Quality and Tissue Volumetrics in Brain Magnetic Resonance Imaging Using the Three-Dimensional T1-Weighted (3D T1W) Sequence.
IF 7.6 Q2 ENGINEERING, BIOMEDICAL Pub Date: 2023-11-15 eCollection Date: 2023-01-01 DOI: 10.1155/2023/6304219
Eric Naab Manson, Stephen Inkoom, Abdul Nashirudeen Mumuni, Issahaku Shirazu, Adolf Kofi Awua

Background: The 3D T1W turbo field echo sequence is a standard imaging method for acquiring high-contrast images of the brain. However, the contrast-to-noise ratio (CNR) can be affected by the turbo factor, which could affect the delineation and segmentation of various structures in the brain and may consequently lead to misdiagnosis. This study is aimed at evaluating the effect of the turbo factor on image quality and volumetric measurement reproducibility in brain magnetic resonance imaging (MRI).

Methods: Brain images of five healthy volunteers with no history of neurological diseases were acquired on a 1.5 T MRI scanner with varying turbo factors of 50, 100, 150, 200, and 225. The images were processed and analyzed with FreeSurfer. The influence of the turbo field echo (TFE) factor on image quality and the reproducibility of brain volume measurements was investigated. Image quality metrics assessed included the signal-to-noise ratio (SNR) of white matter (WM), the CNR between gray matter/white matter (GM/WM) and gray matter/cerebrospinal fluid (GM/CSF), and the Euler number (EN). Moreover, structural brain volume measurements of WM, GM, and CSF were conducted.

Results: Turbo factor 200 produced the best SNR (median = 17.01) and GM/WM CNR (median = 2.29), but turbo factor 100 offered the most reproducible SNR (IQR = 2.72) and GM/WM CNR (IQR = 0.14). Turbo factor 50 had the worst and least reproducible SNR, whereas turbo factor 225 had the worst and least reproducible GM/WM CNR. Turbo factor 200 again had the best GM/CSF CNR but offered the least reproducible GM/CSF CNR. Turbo factor 225 had the best performance on EN (-21), while turbo factor 200 had the next most reproducible EN (11). The results showed that turbo factor 200 had the shortest data acquisition time, in addition to superior performance on SNR, GM/WM CNR, and GM/CSF CNR, and good reproducibility characteristics on EN. By one-way ANOVA, neither image quality metrics nor volumetric measurements varied significantly (p > 0.05) across the range of turbo factors used in the study.

Conclusion: Since no significant differences were observed in the performance of the turbo factors in terms of image quality and volume of brain structure, turbo factor 200 with a 74% acquisition time reduction was found to be optimal for brain MR imaging at 1.5 T.
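Here is a minimal sketch of the one-way ANOVA used above to test whether a metric such as SNR differs across turbo factors; the per-group values are synthetic and deliberately similar, mirroring the reported null result.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Five SNR measurements per turbo factor (one per volunteer), similar by design.
snr_by_tf = {tf: rng.normal(16.5, 1.5, size=5) for tf in (50, 100, 150, 200, 225)}
f_stat, p_value = stats.f_oneway(*snr_by_tf.values())
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")   # p > 0.05 -> no significant effect
```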

Citations: 0
Assessing Predictive Ability of Dynamic Time Warping Functional Connectivity for ASD Classification.
IF 7.6 Q2 ENGINEERING, BIOMEDICAL Pub Date: 2023-10-25 eCollection Date: 2023-01-01 DOI: 10.1155/2023/8512461
Christopher Liu, Juanjuan Fan, Barbara Bailey, Ralph-Axel Müller, Annika Linke

Functional connectivity MRI (fcMRI) is a technique used to study the functional connectedness of distinct regions of the brain by measuring the temporal correlation between their blood oxygen level-dependent (BOLD) signals. fcMRI is typically measured with the Pearson correlation (PC), which assumes that there is no lag between time series. Dynamic time warping (DTW) is an alternative measure of similarity between time series that is robust to such time lags. We used PC fcMRI data and DTW fcMRI data as predictors in machine learning models for classifying autism spectrum disorder (ASD). When combined with dimension reduction techniques, such as principal component analysis, functional connectivity estimated with DTW showed greater predictive ability than functional connectivity estimated with PC. Our results suggest that DTW fcMRI can be a suitable alternative measure that may characterize fcMRI in a different, but complementary, way to PC fcMRI and is worth continued investigation. In studying different variants of cross-validation (CV), our results suggest that, when it is necessary to tune model hyperparameters and assess model performance at the same time, a K-fold CV nested within leave-one-out CV may be a competitive contender in terms of performance and computational speed, especially when sample size is not large.
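Here is a hedged sketch contrasting the two similarity measures discussed above: Pearson correlation, which is sensitive to time lags, versus a textbook dynamic time warping distance, which is robust to them; the O(n²) dynamic-programming implementation is a standard version, not the authors' code.

```python
import numpy as np

def dtw_distance(x: np.ndarray, y: np.ndarray) -> float:
    """Classic DTW: cumulative-cost table with step choices (match, insert, delete)."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t = np.linspace(0, 4 * np.pi, 200)
a = np.sin(t)
b = np.sin(t - 0.8)                      # same signal, shifted in time
print(f"Pearson r = {np.corrcoef(a, b)[0, 1]:.2f}")   # reduced by the lag
print(f"DTW dist  = {dtw_distance(a, b):.2f}")        # small despite the lag
```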

Citations: 0