
Latest Publications: International Journal of Biomedical Imaging

In Situ Immunofluorescence Imaging of Vital Human Pancreatic Tissue Using Fiber-Optic Microscopy.
IF 7.6 Q2 ENGINEERING, BIOMEDICAL Pub Date: 2024-06-06 eCollection Date: 2024-01-01 DOI: 10.1155/2024/1397875
Sophia Ackermann, Maximilian Herold, Vincent Rohrbacher, Michael Schäfer, Marcell Tóth, Stefan Thomann, Thilo Hackert, Eduard Ryschich

Purpose: Surgical resection is the only curative option for pancreatic carcinoma, but disease-free and overall survival times after surgery are limited due to early tumor recurrence, most often originating from local microscopic tumor residues (R1 resection). The intraoperative identification of microscopic tumor residues within the resection margin in situ could improve surgical performance. The aim of this study was to evaluate the effectiveness of fiber-optic microscopy for detecting microscopic residues in vital pancreatic cancer tissues. Experimental Design. Fresh whole-mount human pancreatic tissues, histological tissue slides, cell culture, and chorioallantoic membrane xenografts were analyzed. Specimens were stained with selected fluorophore-conjugated antibodies and studied using conventional wide-field and self-designed multicolor fiber-optic fluorescence microscopy instruments.

Results: Whole-mount vital human tissues and xenografts were stained and imaged using an in situ immunofluorescence protocol. Fiber-optic microscopy enabled the detection of epitope-based fluorescence in vital whole-mount tissue using fluorophore-conjugated antibodies and allowed visualization of microvascular, epithelial, and malignant tumor cells. Among the selected antigen-antibody pairs, antibody clones WM59, AY13, and 9C4 were the most promising for fiber-optic imaging in human tissue samples and for endothelial, tumor, and epithelial cell detection.

Conclusions: Freshly dissected whole-mount tissue can be stained by direct exposure to selected antibody clones. Several antibody clones were identified that provided excellent immunofluorescence imaging of labeled structures, such as endothelial, epithelial, or EGFR-expressing cells. The combination of in situ immunofluorescence staining and fiber-optic microscopy visualizes structures in vital tissues and could serve as a useful tool for the in situ identification of residual tumor mass in patients at high operative risk of incomplete resection.

Citations: 0
COVID-19 Detection from Computed Tomography Images Using Slice Processing Techniques and a Modified Xception Classifier.
IF 7.6 Q2 ENGINEERING, BIOMEDICAL Pub Date: 2024-05-24 eCollection Date: 2024-01-01 DOI: 10.1155/2024/9962839
Kenan Morani, Esra Kaya Ayana, Dimitrios Kollias, Devrim Unay

This paper extends our previous method for COVID-19 diagnosis, proposing an enhanced solution for detecting COVID-19 from computed tomography (CT) images using a lean transfer learning-based model. To decrease model misclassifications, two key image processing steps were employed. First, the uppermost and lowermost slices were removed, preserving sixty percent of each patient's slices. Second, all slices were manually cropped to emphasize the lung areas. Subsequently, resized CT scans (224 × 224) were input into an Xception transfer learning model with a modified output. Both Xception's architecture and pretrained weights were leveraged in the method. A large, rigorously annotated database of CT images was used to verify the method. The dataset contains more than 5000 patients/subjects, and the number and shape of the slices in each CT scan vary greatly. Verification was performed both on the validation partition and on the test partition of unseen images. Results on the COV19-CT database showed not only improvement over our previous solution and the baseline but also performance comparable to the highest-achieving methods on the same dataset. Further validation studies could explore the scalability and adaptability of the developed methodologies across diverse healthcare settings and patient populations. Additionally, investigating the integration of advanced image processing techniques, such as automated region of interest detection and segmentation algorithms, could enhance the efficiency and accuracy of COVID-19 diagnosis.
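The slice-selection and transfer learning pipeline described in the abstract is concrete enough to sketch. Below is a minimal Python/Keras illustration of the central-slice selection and an Xception backbone with a modified output; the single-unit sigmoid head, the average-pooling choice, and the training settings are assumptions for illustration, not the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def select_central_slices(slices, keep_fraction=0.6):
    """Remove the uppermost and lowermost slices, preserving the
    central fraction (sixty percent in the paper) of each patient's scan."""
    n = len(slices)
    drop = int(n * (1.0 - keep_fraction) / 2)
    return slices[drop:n - drop]

def build_covid_classifier(input_shape=(224, 224, 3)):
    """Xception backbone with ImageNet weights and a modified output;
    the one-unit sigmoid head is an assumed COVID/non-COVID choice."""
    base = tf.keras.applications.Xception(
        include_top=False, weights="imagenet",
        input_shape=input_shape, pooling="avg")
    output = layers.Dense(1, activation="sigmoid")(base.output)
    return models.Model(base.input, output)

model = build_covid_classifier()
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```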

Citations: 0
Swin Transformer and the Unet Architecture to Correct Motion Artifacts in Magnetic Resonance Image Reconstruction.
IF 7.6 Q2 ENGINEERING, BIOMEDICAL Pub Date: 2024-05-02 eCollection Date: 2024-01-01 DOI: 10.1155/2024/8972980
Md Biddut Hossain, Rupali Kiran Shinde, Shariar Md Imtiaz, F M Fahmid Hossain, Seok-Hee Jeon, Ki-Chul Kwon, Nam Kim

We present a deep learning-based method that corrects motion artifacts and thus accelerates data acquisition and reconstruction of magnetic resonance images. The novel model, the Motion Artifact Correction by Swin Network (MACS-Net), uses a Swin transformer layer as the fundamental block and the Unet architecture as the neural network backbone. We employ a hierarchical transformer with shifted windows to extract multiscale contextual features during encoding. A new dual upsampling technique is employed to enhance the spatial resolutions of feature maps in the Swin transformer-based decoder layer. A raw magnetic resonance imaging dataset is used for network training and testing; the data contain various motion artifacts with ground truth images of the same subjects. The results were compared to six state-of-the-art MRI motion correction methods using two types of motion. When motions were brief (within 5 s), the method reduced the average normalized root mean square error (NRMSE) from 45.25% to 17.51%, increased the mean structural similarity index measure (SSIM) from 79.43% to 91.72%, and increased the peak signal-to-noise ratio (PSNR) from 18.24 to 26.57 dB. Similarly, when motions were extended from 5 to 10 s, our approach decreased the average NRMSE from 60.30% to 21.04%, improved the mean SSIM from 33.86% to 90.33%, and increased the PSNR from 15.64 to 24.99 dB. The anatomical structures of the corrected images and the motion-free brain data were similar.
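The hierarchical transformer with shifted windows is the core of the encoder. As a point of reference for readers unfamiliar with the operation, here is a minimal NumPy sketch of window partitioning with an optional cyclic shift, the step a Swin layer applies before windowed self-attention; the array layout and the example sizes are illustrative assumptions, not MACS-Net code.

```python
import numpy as np

def window_partition(x, window_size, shift=0):
    """Split a feature map of shape (H, W, C) into non-overlapping
    windows of shape (window_size, window_size, C); a nonzero shift
    cyclically rolls the map first, as in shifted-window attention."""
    if shift:
        x = np.roll(x, shift=(-shift, -shift), axis=(0, 1))
    H, W, C = x.shape
    x = x.reshape(H // window_size, window_size,
                  W // window_size, window_size, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, window_size, window_size, C)

# Example: a 224 x 224 single-channel map split into 7 x 7 windows.
feature_map = np.random.rand(224, 224, 1)
windows = window_partition(feature_map, window_size=7, shift=3)
print(windows.shape)  # (1024, 7, 7, 1)
```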

Citations: 0
ContourTL-Net: Contour-Based Transfer Learning Algorithm for Early-Stage Brain Tumor Detection.
IF 7.6 Q2 ENGINEERING, BIOMEDICAL Pub Date: 2024-04-29 eCollection Date: 2024-01-01 DOI: 10.1155/2024/6347920
N I Md Ashafuddula, Rafiqul Islam

Brain tumors are critical neurological ailments caused by uncontrolled cell growth in the brain or skull, often leading to death. Increasing patient longevity requires prompt detection; however, the complexity of brain tissue makes early diagnosis challenging, so automated tools are necessary to aid healthcare professionals. This study is particularly aimed at improving the efficacy of computerized brain tumor detection in a clinical setting through a deep learning model. To this end, a novel thresholding-based MRI image segmentation approach with a contour-based transfer learning model (ContourTL-Net) is proposed to facilitate the clinical detection of brain malignancies at an initial phase. The model utilizes contour-based analysis, which is critical for object detection, precise segmentation, and capturing subtle variations in tumor morphology. It employs a VGG-16 architecture pretrained on the "ImageNet" collection for feature extraction and categorization, using ten nontrainable and three trainable convolutional layers together with three dropout layers. The proposed ContourTL-Net model is evaluated on two benchmark datasets in four settings, one of which, an unseen case, represents the clinical aspect. Validating a deep learning model on unseen data is crucial to determine the model's generalization capability, domain adaptation, robustness, and real-world applicability. Here, the presented model's outcomes demonstrate a highly accurate classification of the unseen data, achieving a perfect sensitivity and negative predictive value (NPV) of 100%, 98.60% specificity, 99.12% precision, 99.56% F1-score, and 99.46% accuracy. Additionally, the outcomes of the suggested model are compared with state-of-the-art methodologies to further enhance its effectiveness. The proposed solution outperforms the existing solutions in both seen and unseen data, with the potential to significantly improve brain tumor detection efficiency and accuracy, leading to earlier diagnoses and improved patient outcomes.
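The abstract pins down the backbone precisely: VGG-16 pretrained on ImageNet with ten frozen convolutional layers, three trainable ones, and three dropout layers (VGG-16 has thirteen convolutional layers in total, so the 10 + 3 split covers them all). A minimal Keras sketch of that freezing scheme follows; the classifier head sizes and the 0.5 dropout rate are assumptions, since the abstract does not specify them.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_contour_tl_net(input_shape=(224, 224, 3), num_classes=2):
    """VGG-16 backbone with the 10-frozen / 3-trainable convolutional
    split described in the abstract, plus three dropout layers."""
    base = tf.keras.applications.VGG16(
        include_top=False, weights="imagenet", input_shape=input_shape)
    conv_layers = [l for l in base.layers if isinstance(l, layers.Conv2D)]
    for layer in conv_layers[:10]:
        layer.trainable = False          # ten nontrainable conv layers
    for layer in conv_layers[10:]:
        layer.trainable = True           # three trainable conv layers
    x = layers.Flatten()(base.output)
    for units in (256, 128, 64):         # head sizes are assumptions
        x = layers.Dense(units, activation="relu")(x)
        x = layers.Dropout(0.5)(x)       # three dropout layers
    output = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(base.input, output)
```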

Citations: 0
A Deep Learning Approach to Classify Fabry Cardiomyopathy from Hypertrophic Cardiomyopathy Using Cine Imaging on Cardiac Magnetic Resonance.
IF 7.6 Q2 ENGINEERING, BIOMEDICAL Pub Date: 2024-04-26 eCollection Date: 2024-01-01 DOI: 10.1155/2024/6114826
Wei-Wen Chen, Ling Kuo, Yi-Xun Lin, Wen-Chung Yu, Chien-Chao Tseng, Yenn-Jiang Lin, Ching-Chun Huang, Shih-Lin Chang, Jacky Chung-Hao Wu, Chun-Ku Chen, Ching-Yao Weng, Siwa Chan, Wei-Wen Lin, Yu-Cheng Hsieh, Ming-Chih Lin, Yun-Ching Fu, Tsung Chen, Shih-Ann Chen, Henry Horng-Shing Lu

A challenge in accurately identifying and classifying left ventricular hypertrophy (LVH) is distinguishing it from hypertrophic cardiomyopathy (HCM) and Fabry disease. The reliance on imaging techniques often requires the expertise of multiple specialists, including cardiologists, radiologists, and geneticists, and this variability in the interpretation and classification of LVH leads to inconsistent diagnoses. LVH, HCM, and Fabry cardiomyopathy can be differentiated using T1 mapping on cardiac magnetic resonance imaging (MRI). However, differentiating HCM from Fabry cardiomyopathy using echocardiography or MRI cine images is challenging for cardiologists. Our proposed system, the MRI short-axis view left ventricular hypertrophy classifier (MSLVHC), is a high-accuracy standardized imaging classification model developed using AI and trained on MRI short-axis (SAX) view cine images to distinguish between HCM and Fabry disease. The model achieved impressive performance, with an F1-score of 0.846, an accuracy of 0.909, and an AUC of 0.914 when tested on the Taipei Veterans General Hospital (TVGH) dataset. Additionally, a single-blinded study and external testing using data from the Taichung Veterans General Hospital (TCVGH) confirmed the model's reliability and effectiveness, yielding an F1-score of 0.727, an accuracy of 0.806, and an AUC of 0.918. This AI model holds promise as a valuable tool for assisting specialists in diagnosing LVH diseases.
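For readers who want to reproduce the headline numbers from saved predictions, the three reported metrics are standard and can be computed with scikit-learn as below; the variable names and the 0.5 decision threshold are placeholders, not details from the study.

```python
from sklearn.metrics import f1_score, accuracy_score, roc_auc_score

def evaluate_lvh_classifier(y_true, y_prob, threshold=0.5):
    """F1 and accuracy at a fixed decision threshold, plus the
    threshold-free AUC, matching the metrics reported in the study."""
    y_pred = [int(p >= threshold) for p in y_prob]
    return {
        "F1": f1_score(y_true, y_pred),
        "accuracy": accuracy_score(y_true, y_pred),
        "AUC": roc_auc_score(y_true, y_prob),
    }
```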

Citations: 0
Enhanced Myocardial Tissue Visualization: A Comparative Cardiovascular Magnetic Resonance Study of Gradient-Spin Echo-STIR and Conventional STIR Imaging.
IF 7.6 Q2 ENGINEERING, BIOMEDICAL Pub Date: 2024-04-01 eCollection Date: 2024-01-01 DOI: 10.1155/2024/8456669
Sadegh Dehghani, Shapoor Shirani, Elahe Jazayeri Gharebagh

Purpose: This study is aimed at evaluating the efficacy of the gradient-spin echo- (GraSE-) based short tau inversion recovery (STIR) sequence (GraSE-STIR) in cardiovascular magnetic resonance (CMR) imaging compared to the conventional turbo spin echo- (TSE-) based STIR sequence, specifically focusing on image quality, specific absorption rate (SAR), and image acquisition time.

Methods: In a prospective study, we examined forty-four normal volunteers and seventeen patients referred for CMR imaging using conventional STIR and GraSE-STIR techniques. Signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), image quality, T2 signal intensity (SI) ratio, SAR, and image acquisition time were compared between both sequences.
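The abstract does not give the SNR and CNR formulas; the conventional region-of-interest definitions are sketched below, and treating these as the ones used in the study is an assumption.

```python
import numpy as np

def snr(signal_roi, noise_roi):
    """Signal-to-noise ratio: mean tissue signal over the standard
    deviation of a background-noise region (a common convention)."""
    return np.mean(signal_roi) / np.std(noise_roi)

def cnr(tissue_a_roi, tissue_b_roi, noise_roi):
    """Contrast-to-noise ratio: absolute signal difference between two
    tissues divided by the background-noise standard deviation."""
    return abs(np.mean(tissue_a_roi) - np.mean(tissue_b_roi)) / np.std(noise_roi)
```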

Results: GraSE-STIR showed significant improvements in image quality (4.15 ± 0.8 vs. 3.34 ± 0.9, p = 0.024) and cardiac motion artifact reduction (7 vs. 18 out of 53, p = 0.038) compared to conventional STIR. Furthermore, the acquisition time (27.17 ± 3.53 vs. 36.9 ± 4.08 seconds, p = 0.041) and the local torso SAR (<13% vs. <17%, p = 0.047) were significantly lower for GraSE-STIR compared to conventional STIR in the short-axis plane. However, no significant differences were found in the T2 SI ratio (p = 0.141), SNR (p = 0.093), CNR (p = 0.068), and SAR (p = 0.071) between the two sequences.

Conclusions: GraSE-STIR offers notable advantages over the conventional STIR sequence, with improved image quality, reduced motion artifacts, and shorter acquisition times. These findings highlight the potential of GraSE-STIR as a valuable technique for routine clinical CMR imaging.

Citations: 0
Detecting MRI-Invisible Prostate Cancers Using a Weakly Supervised Deep Learning Model.
IF 7.6 Q2 ENGINEERING, BIOMEDICAL Pub Date: 2024-03-19 eCollection Date: 2024-01-01 DOI: 10.1155/2024/2741986
Yao Zheng, Jingliang Zhang, Dong Huang, Xiaoshuo Hao, Weijun Qin, Yang Liu

Background: MRI is an important tool for accurate detection and targeted biopsy of prostate lesions. However, some prostate cancers appear similar to the surrounding normal tissue on MRI; these are referred to as MRI-invisible prostate cancers (MIPCas). The detection of MIPCas remains challenging and requires extensive systematic biopsy for identification. In this study, we developed a weakly supervised UNet (WSUNet) to detect MIPCas.

Methods: The study included 777 patients (training set: 600; testing set: 177), all of whom underwent comprehensive prostate biopsies using an MRI-ultrasound fusion system. MIPCas were identified on MRI based on the Gleason grade (≥7) from known systematic biopsy results.

Results: The WSUNet model underwent validation through systematic biopsy in the testing set with an AUC of 0.764 (95% CI: 0.728-0.798). Furthermore, WSUNet exhibited a statistically significant precision improvement of 91.3% (p < 0.01) over conventional systematic biopsy methods in the testing set. This improvement resulted in a substantial 47.6% (p < 0.01) decrease in unnecessary biopsy needles, while maintaining the same number of positively identified cores as in the original systematic biopsy.

Conclusions: The proposed WSUNet could effectively detect MIPCas, thereby reducing unnecessary biopsies.

Citations: 0
Empowering Radiographers: A Call for Integrated AI Training in University Curricula.
IF 7.6 Q2 ENGINEERING, BIOMEDICAL Pub Date: 2024-03-08 eCollection Date: 2024-01-01 DOI: 10.1155/2024/7001343
Mohammad A Rawashdeh, Sara Almazrouei, Maha Zaitoun, Praveen Kumar, Charbel Saade

Background: Artificial intelligence (AI) applications are rapidly advancing in the field of medical imaging. This study is aimed at investigating radiographers' perception and knowledge of artificial intelligence.

Methods: An online survey was conducted using Google Forms, consisting of 20 questions on radiographers' perception of AI. The questionnaire was divided into two parts. The first part collected demographic information and asked whether the participants think AI should be part of medical training, about their previous knowledge of the technologies used in AI, and whether they would prefer to receive training on AI. The second part of the questionnaire comprised two fields, the first containing 16 questions on radiographers' perception of AI applications in radiology. Descriptive analysis and logistic regression analysis were used to evaluate the effect of gender on the items of the questionnaire.

Results: Familiarity with AI was low, with only 52 of 100 respondents (52%) reporting good familiarity with it. Many participants considered AI useful in the medical field (74%). The findings demonstrate that nearly all participants (98%) believed that AI should be integrated into university education, and 87% of respondents preferred to receive training on AI, with some already having prior knowledge of the technologies used in AI. The logistic regression analysis indicated a significant association of male gender and of experience in the range of 23-27 years with the degree of familiarity with AI technology, with respective crude odds ratios of 1.89 and 1.87.
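The crude odds ratios (CORs) quoted above come from a logistic analysis of the survey responses. As a reference for how such a figure is obtained from binary survey variables, here is a small sketch; the cell counts in the example are hypothetical and do not reproduce the study's data.

```python
import numpy as np

def crude_odds_ratio(exposure, outcome):
    """Crude odds ratio (a*d)/(b*c) from binary exposure and outcome
    vectors, e.g., exposure = male gender, outcome = familiarity with AI."""
    exposure, outcome = np.asarray(exposure), np.asarray(outcome)
    a = np.sum((exposure == 1) & (outcome == 1))  # exposed, outcome present
    b = np.sum((exposure == 1) & (outcome == 0))  # exposed, outcome absent
    c = np.sum((exposure == 0) & (outcome == 1))  # unexposed, outcome present
    d = np.sum((exposure == 0) & (outcome == 0))  # unexposed, outcome absent
    return (a * d) / (b * c)

# Hypothetical responses (1 = male / familiar with AI, 0 = otherwise).
male = [1, 1, 1, 0, 0, 0, 1, 0]
familiar = [1, 1, 0, 1, 0, 0, 1, 0]
print(crude_odds_ratio(male, familiar))
```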

Conclusions: This study suggests that practitioners in the radiology field have a favorable attitude towards AI. Most participants surveyed believed that AI should be part of radiography education. AI training programs for undergraduate and postgraduate radiographers may be necessary to prepare them for AI tools in radiology.

Citations: 0
Facile Conversion and Optimization of Structured Illumination Image Reconstruction Code into the GPU Environment.
IF 7.6 Q2 ENGINEERING, BIOMEDICAL Pub Date: 2024-02-28 eCollection Date: 2024-01-01 DOI: 10.1155/2024/8862387
Kwangsung Oh, Piero R Bianco

Superresolution structured illumination microscopy (SIM) is an ideal modality for imaging live cells due to its relatively high speed and low photon-induced damage to the cells. The rate-limiting step in observing a superresolution image in SIM is often the reconstruction speed of the algorithm used to form a single image from as many as nine raw images. Reconstruction algorithms impose a significant computing burden due to an intricate workflow and a large number of often complex calculations needed to produce the final image. Adding to this burden, the code, even within the MATLAB environment, is often written inefficiently by microscopists who are not computer science researchers, and it frequently does not take into consideration the processing power of the computer's graphics processing unit (GPU). To address these issues, we present simple but efficient approaches to first revise MATLAB code and then convert it to GPU-optimized code. When combined with cost-effective, high-performance GPU-enabled computers, a 4- to 500-fold improvement in algorithm execution speed is observed, as shown for the image-denoising Hessian-SIM algorithm. Importantly, the improved algorithm produces images identical in quality to the original.
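The paper's approach is to revise the MATLAB code and then move the heavy array work onto the GPU. A Python analogue of the same pattern is sketched below with CuPy (the paper itself works in MATLAB): transfer the data to the GPU once, keep every intermediate computation there, and copy back once at the end. The toy low-pass FFT filter is purely illustrative and is not the Hessian-SIM algorithm.

```python
import numpy as np
import cupy as cp  # GPU drop-in counterpart to NumPy

def lowpass_fft_gpu(image, keep_fraction=0.1):
    """Toy frequency-domain low-pass filter illustrating the pattern:
    one host-to-device transfer, all work on the GPU, one transfer back."""
    g = cp.asarray(image)                    # host -> device, once
    spectrum = cp.fft.fft2(g)
    h, w = spectrum.shape
    kh, kw = int(h * keep_fraction), int(w * keep_fraction)
    mask = cp.zeros_like(spectrum)
    mask[:kh, :kw] = mask[:kh, -kw:] = 1     # keep low-frequency corners
    mask[-kh:, :kw] = mask[-kh:, -kw:] = 1
    filtered = cp.fft.ifft2(spectrum * mask).real
    return cp.asnumpy(filtered)              # device -> host, once

result = lowpass_fft_gpu(np.random.rand(512, 512))
```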

Citations: 0
White Matter Fiber Tracking Method with Adaptive Correction of Tracking Direction.
IF 7.6 Q2 ENGINEERING, BIOMEDICAL Pub Date: 2024-02-05 eCollection Date: 2024-01-01 DOI: 10.1155/2024/4102461
Qian Zheng, Kefu Guo, Yinghui Meng, Jiaofen Nan, Lin Xu

Background: The deterministic fiber tracking method has the advantages of high computational efficiency and good repeatability, making it suitable for the noninvasive estimation of brain structural connectivity in clinical settings. To address the tendency of current classical deterministic methods to deviate from the correct tracking direction in regions of crossing fibers, we propose an adaptive correction-based deterministic white matter fiber tracking method, named FTACTD.

Methods: The proposed FTACTD method accurately tracks white matter fibers by adaptively adjusting the deflection direction based on the tensor matrix and the input fiber direction of adjacent voxels. The degree of directional correction changes adaptively according to the shape of the diffusion tensor, mimicking the actual tracking deflection angle and direction. Furthermore, both forward and reverse tracking techniques are employed to track the entire fiber. The effectiveness of the proposed method is validated and quantified using both simulated and real brain datasets. Various indicators such as invalid bundles (IB), valid bundles (VB), invalid connections (IC), no connections (NC), and valid connections (VC) are utilized to assess the performance of the proposed method on simulated data and real diffusion-weighted imaging (DWI) data.
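For context, the classical deterministic step that FTACTD corrects can be stated in a few lines: follow the principal eigenvector of the local diffusion tensor, sign-aligned with the incoming direction, and stop when the turn exceeds a curvature threshold. The sketch below shows only that classical skeleton, with an assumed (X, Y, Z, 3, 3) tensor-field layout; the adaptive deflection correction itself is specific to FTACTD and is not reproduced here.

```python
import numpy as np

def principal_direction(tensor):
    """Unit principal eigenvector of a 3 x 3 diffusion tensor."""
    vals, vecs = np.linalg.eigh(tensor)
    return vecs[:, np.argmax(vals)]

def track_step(pos, prev_dir, tensor_field, step=0.5, max_angle_deg=45.0):
    """One classical deterministic streamline step; returns None when the
    curvature threshold is exceeded. tensor_field has shape (X, Y, Z, 3, 3)."""
    i, j, k = np.round(pos).astype(int)
    d = principal_direction(tensor_field[i, j, k])
    if np.dot(d, prev_dir) < 0:
        d = -d                                # eigenvectors carry no sign
    if np.dot(d, prev_dir) < np.cos(np.deg2rad(max_angle_deg)):
        return None                           # sharp turn: stop tracking
    return pos + step * d, d
```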

Results: The experimental results on simulated data show that the FTACTD method outperforms existing methods, achieving the highest number of VB with a total of 13 bundles. Additionally, it identifies the fewest incorrect fiber bundles, with only 32 bundles identified as wrong. Compared to the FACT method, the FTACTD method reduces the number of NC by 36.38%. In terms of VC, the FTACTD method surpasses even SD_Stream, the best-performing deterministic method, by 1.64%. Extensive in vivo experiments demonstrate the superiority of the proposed method in tracking more accurate and complete fiber paths, resulting in improved continuity.

Conclusion: The FTACTD method proposed in this study yields superior tracking results and provides a methodological basis for the investigation, diagnosis, and treatment of brain disorders associated with white matter fiber deficits and abnormalities.

Citations: 0