
Latest articles in Computer methods and programs in biomedicine

ATOMMIC: An Advanced Toolbox for Multitask Medical Imaging Consistency to facilitate Artificial Intelligence applications from acquisition to analysis in Magnetic Resonance Imaging
IF 4.9 | CAS Tier 2 (Medicine) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-08-22 | DOI: 10.1016/j.cmpb.2024.108377

Background and Objectives:

Artificial intelligence (AI) is revolutionizing Magnetic Resonance Imaging (MRI) along the acquisition and processing chain. Advanced AI frameworks have been applied in various successive tasks, such as image reconstruction, quantitative parameter map estimation, and image segmentation. However, existing frameworks are often designed to perform tasks independently of each other or are focused on specific models or single datasets, limiting generalization. This work introduces the Advanced Toolbox for Multitask Medical Imaging Consistency (ATOMMIC), a novel open-source toolbox that streamlines AI applications for accelerated MRI reconstruction and analysis. ATOMMIC implements several tasks using deep learning (DL) models and enables MultiTask Learning (MTL) to perform related tasks in an integrated manner, targeting generalization in the MRI domain.
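The abstract does not spell out how ATOMMIC combines task objectives in MTL; as a hedged illustration only, a joint reconstruction-plus-segmentation loss of the kind MTL frameworks typically use could be sketched as follows (the weights `w_rec`/`w_seg` and the L2/Dice choices are assumptions, not ATOMMIC's actual objective):

```python
import numpy as np

def mtl_loss(recon_pred, recon_target, seg_pred, seg_target,
             w_rec=0.5, w_seg=0.5):
    """Weighted sum of a reconstruction loss (L2 on possibly complex-valued
    images) and a soft Dice segmentation loss. Illustrative only."""
    # Reconstruction: mean squared error; np.abs handles complex k-space/image data.
    l_rec = np.mean(np.abs(recon_pred - recon_target) ** 2)
    # Segmentation: soft Dice loss over probability maps.
    inter = np.sum(seg_pred * seg_target)
    dice = (2 * inter + 1e-6) / (np.sum(seg_pred) + np.sum(seg_target) + 1e-6)
    l_seg = 1.0 - dice
    return w_rec * l_rec + w_seg * l_seg

# Toy example: perfect predictions on both tasks give zero total loss.
img = np.random.rand(8, 8) + 1j * np.random.rand(8, 8)
mask = (np.random.rand(8, 8) > 0.5).astype(float)
print(mtl_loss(img, img, mask, mask))  # 0.0
```

In practice the task losses are backpropagated through shared and task-specific network branches; only the scalar combination is shown here.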

Methods:

We conducted a comprehensive literature review and analyzed 12,479 GitHub repositories to assess the current landscape of AI frameworks for MRI. Subsequently, we demonstrate how ATOMMIC standardizes workflows and improves data interoperability, enabling effective benchmarking of various DL models across MRI tasks and datasets. To showcase ATOMMIC’s capabilities, we evaluated twenty-five DL models on eight publicly available datasets, focusing on accelerated MRI reconstruction, segmentation, quantitative parameter map estimation, and joint accelerated MRI reconstruction and segmentation using MTL.

Results:

ATOMMIC’s high-performance training and testing capabilities, utilizing multiple GPUs and mixed precision support, enable efficient benchmarking of multiple models across various tasks. The framework’s modular architecture implements each task through a collection of data loaders, models, loss functions, evaluation metrics, and pre-processing transformations, facilitating seamless integration of new tasks, datasets, and models. Our findings demonstrate that ATOMMIC supports MTL for multiple MRI tasks with harmonized complex-valued and real-valued data support while maintaining active development and documentation. Task-specific evaluations demonstrate that physics-based models outperform other approaches in reconstructing highly accelerated acquisitions. These high-quality reconstruction models also show superior accuracy in estimating quantitative parameter maps. Furthermore, when combining high-performing reconstruction models with robust segmentation networks through MTL, performance is improved in both tasks.

Conclusions:

ATOMMIC advances MRI reconstruction and analysis by leveraging MTL and ensuring consistency across tasks, models, and datasets. This comprehensive framework serves as a versatile platform for researchers to use existing AI methods and develop new approaches in medical imaging.

Citations: 0
Evaluation of tumor budding with virtual panCK stains generated by novel multi-model CNN framework
IF 4.9 | CAS Tier 2 (Medicine) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-08-22 | DOI: 10.1016/j.cmpb.2024.108352

As the global incidence of cancer continues to rise rapidly, the need for swift and precise diagnoses has become increasingly pressing. Pathologists commonly rely on H&E-panCK stain pairs for various aspects of cancer diagnosis, including the detection of occult tumor cells and the evaluation of tumor budding. Nevertheless, conventional chemical staining methods suffer from notable drawbacks, such as time-intensive processes and irreversible staining outcomes. The virtual staining technique, leveraging generative adversarial networks (GANs), has emerged as a promising alternative to chemical stains. This approach aims to transform biopsy scans (often H&E) into other stain types. Despite notable progress in recent years, current state-of-the-art virtual staining models confront challenges that hinder their efficacy, particularly in achieving accurate staining outcomes under specific conditions. These limitations have impeded the practical integration of virtual staining into diagnostic practices. To produce virtual panCK stains capable of replacing chemical panCK, we propose an innovative multi-model framework. Our approach employs a combination of Mask-RCNN (for cell segmentation) and GAN models to extract the cytokeratin distribution from chemical H&E images. Additionally, we introduce a tailored dynamic GAN model that converts H&E images into virtual panCK stains by integrating the derived cytokeratin distribution. Our framework is motivated by the fact that the unique pattern of panCK derives from the cytokeratin distribution. As a proof of concept, we employ our virtual panCK stains to evaluate tumor budding in 45 H&E whole-slide images taken from breast cancer-invaded lymph nodes. Through thorough validation by both pathologists and the QuPath software, our virtual panCK stains demonstrate a remarkable level of accuracy. In stark contrast, the accuracy of state-of-the-art single-cycleGAN virtual panCK stains is negligible. To the best of our knowledge, this is the first instance of a multi-model virtual panCK framework and of the use of virtual panCK for tumor budding assessment. Our framework excels at generating dependable virtual panCK stains with significantly improved efficiency, thereby considerably reducing diagnostic turnaround times. Furthermore, its outcomes are easily comprehensible even to pathologists who are not well-versed in computer technology. We firmly believe that our framework has the potential to advance the field of virtual staining, thereby making significant strides towards improved cancer diagnosis.
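As a purely hypothetical sketch of the integration idea described above — modulating a GAN-translated panCK image by a cytokeratin probability map restricted to Mask-RCNN cell masks — the following illustrates the combination step only; none of these array names correspond to the paper's actual model interfaces:

```python
import numpy as np

def integrate_virtual_panck(he_to_panck, cyto_prob, cell_mask):
    """Hypothetical integration step for a multi-model virtual-stain pipeline.

    he_to_panck : raw H&E-to-panCK GAN translation, float image in [0, 1]
    cyto_prob   : per-pixel cytokeratin probability from a separate model
    cell_mask   : binary cell mask from an instance segmenter (e.g. Mask-RCNN)
    """
    # Stain intensity is kept only where a cell exists, scaled by how likely
    # that pixel is to contain cytokeratin.
    out = he_to_panck * cyto_prob * cell_mask
    return np.clip(out, 0.0, 1.0)
```

The point of the sketch is the data flow: segmentation constrains *where* stain may appear, while the learned distribution controls *how much* appears there.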

Citations: 0
On the application of hybrid deep 3D convolutional neural network algorithms for predicting the micromechanics of brain white matter
IF 4.9 | CAS Tier 2 (Medicine) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-08-22 | DOI: 10.1016/j.cmpb.2024.108381

Background:

Material characterization of brain white matter (BWM) is difficult due to the anisotropy inherent to the three-dimensional microstructure and the various interactions between heterogeneous brain-tissue components (axons, myelin, and glia). Developing full-scale finite element models that accurately represent the relationship between micro- and macroscale BWM is, however, extremely challenging and computationally expensive. The anisotropic properties of the BWM microstructure, computed by building unit cells under frequency-domain viscoelasticity, comprise 36 individual constants each for the loss and storage moduli. Furthermore, the architecture of each unit cell is arbitrary in an infinite dataset.

Methods:

In this study, we extend our previous work on developing representative volume elements (RVEs) of the BWM microstructure in the frequency domain to develop 3D deep learning algorithms that can predict the anisotropic composite properties. The deep 3D convolutional neural network (CNN) algorithms use a voxelization method to obtain geometry information from the 3D RVEs. The architecture information encoded in the voxelized locations is employed as input data, cross-referenced against the RVEs' material properties (output data). We further improve the efficiency of the deep learning algorithms by incorporating parallel pathways, residual neural networks, and inception modules.
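A generic voxelization step of the kind the methods describe might look like the following; the grid size and the point-based encoding of the RVE architecture are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

def voxelize(points, grid=16):
    """Map 3D coordinates in the unit cube [0, 1)^3 to a binary occupancy grid.

    The resulting (grid, grid, grid) tensor is the kind of geometry encoding
    a 3D CNN can consume directly as input.
    """
    vox = np.zeros((grid, grid, grid), dtype=np.uint8)
    # Scale unit-cube coordinates to voxel indices, clamping the boundary.
    idx = np.clip((np.asarray(points) * grid).astype(int), 0, grid - 1)
    vox[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return vox

# Two axon centerline points occupy two voxels of a 16^3 input tensor.
pts = [[0.1, 0.2, 0.3], [0.9, 0.9, 0.9]]
print(voxelize(pts).sum())  # 2
```

In a real pipeline each voxel would typically carry richer channels (e.g. material phase labels) rather than a single occupancy bit.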

Results:

This paper presents different CNN algorithms for predicting the anisotropic composite properties of BWM. A quantitative analysis of the individual algorithms is presented with a view to identifying optimal strategies for interpreting combined measurements of brain MRE and DTI.

Significance:

The proposed Multiscale 3D ResNet (M3DR) algorithm demonstrates high learning ability and outperforms baseline CNN algorithms in predicting BWM tissue properties. The hybrid M3DR framework also overcomes significant limitations encountered when modeling brain tissue with finite elements alone, such as high computational cost and mesh or simulation failure. The proposed framework further provides an efficient and streamlined platform for implementing complex boundary conditions, modeling intrinsic material properties, and imparting interfacial architecture information.

Citations: 0
An interpretable tinnitus prediction framework using gap-prepulse inhibition in auditory late response and electroencephalogram
IF 4.9 | CAS Tier 2 (Medicine) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-08-21 | DOI: 10.1016/j.cmpb.2024.108371

Background and Objective

Tinnitus is a neuropathological condition that results in mild buzzing or ringing of the ears without an external sound source. Current tinnitus diagnostic methods often rely on subjective assessment and require intricate medical examinations. This study aimed to propose an interpretable tinnitus diagnostic framework using auditory late response (ALR) and electroencephalogram (EEG), inspired by the gap-prepulse inhibition (GPI) paradigm.

Methods

We collected spontaneous EEG and ALR data from 44 patients with tinnitus and 47 hearing loss-matched controls, using specialized hardware to capture responses to sound stimuli with embedded gaps. In this cohort study of tinnitus and control groups, we examined EEG spectral features and ALR features of N-P complexes, comparing responses to gap durations of 50 and 20 ms alongside no-gap conditions. To this end, we developed an interpretable tinnitus diagnostic model using ALR and EEG metrics, a gradient-boosting machine learning architecture, and explainable feature-attribution approaches.
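The abstract does not define the GPI ratio precisely; one plausible reading, with typical ALR latency windows assumed for N1 and P2 (both the windows and the ratio definition are assumptions for illustration), is:

```python
import numpy as np

def n1p2_amplitude(erp, times, n1_win=(0.08, 0.15), p2_win=(0.15, 0.25)):
    """Peak-to-peak N1-P2 amplitude: P2 maximum minus N1 minimum, searched in
    canonical ALR latency windows (seconds). Windows are assumed, not the paper's."""
    t = np.asarray(times)
    n1 = erp[(t >= n1_win[0]) & (t < n1_win[1])].min()
    p2 = erp[(t >= p2_win[0]) & (t < p2_win[1])].max()
    return p2 - n1

def gpi_ratio(erp_gap, erp_nogap, times):
    """Gap-condition amplitude relative to no-gap; smaller = stronger inhibition."""
    return n1p2_amplitude(erp_gap, times) / n1p2_amplitude(erp_nogap, times)

# Synthetic example: the gap condition halves the N1-P2 amplitude.
times = np.linspace(0, 0.4, 401)
nogap = np.zeros_like(times); nogap[100] = -2.0; nogap[200] = 3.0
gap = np.zeros_like(times); gap[100] = -1.0; gap[200] = 1.5
print(gpi_ratio(gap, nogap, times))  # 0.5
```

Intact gap detection should drive this ratio well below 1; a ratio near 1 would indicate a GPI deficit of the kind the study associates with tinnitus.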

Results

Our proposed model achieved 90% accuracy in identifying tinnitus, with an area under the performance curve of 0.89. The explainable artificial intelligence approaches revealed gap-embedded ALR features, such as the GPI ratio of N1-P2 and the EEG spectral ratio, that can serve as diagnostic metrics for tinnitus. Our method successfully provides personalized prediction explanations for tinnitus diagnosis using gap-embedded auditory and neurological features.

Conclusions

Deficits in GPI alongside activity in the EEG alpha-beta ratio offer a promising screening tool for assessing tinnitus risk, aligning with current clinical insights from hearing research.

Citations: 0
Physically informed deep neural networks for metabolite-corrected plasma input function estimation in dynamic PET imaging
IF 4.9 | CAS Tier 2 (Medicine) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-08-20 | DOI: 10.1016/j.cmpb.2024.108375

Introduction:

We propose a novel approach for the non-invasive quantification of dynamic PET imaging data, focusing on the arterial input function (AIF) without the need for invasive arterial cannulation.

Methods:

Our method utilizes a combination of three-dimensional depth-wise separable convolutional layers and a physically informed deep neural network to incorporate a priori knowledge about the AIF's functional form and shape, enabling precise predictions of the concentrations of [11C]PBR28 in whole blood and of the free tracer in metabolite-corrected plasma.
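The abstract does not name the AIF functional form used as the prior; a common parametric choice in the PET literature is Feng's tri-exponential model, sketched here with arbitrary illustrative parameter values (not the paper's):

```python
import numpy as np

def feng_aif(t, tau=0.5, a1=800.0, a2=20.0, a3=20.0, l1=4.0, l2=0.1, l3=0.01):
    """Feng's tri-exponential arterial input function.

    C(t) = 0 for t <= tau (pre-arrival), otherwise
    C(t) = (A1*(t-tau) - A2 - A3) * exp(-l1*(t-tau))
           + A2 * exp(-l2*(t-tau)) + A3 * exp(-l3*(t-tau))

    tau is the tracer arrival delay; all parameter values here are
    illustrative placeholders.
    """
    t = np.asarray(t, dtype=float)
    dt = t - tau
    c = ((a1 * dt - a2 - a3) * np.exp(-l1 * dt)
         + a2 * np.exp(-l2 * dt) + a3 * np.exp(-l3 * dt))
    return np.where(dt > 0, c, 0.0)
```

A network constrained to output the parameters of such a form is guaranteed to produce a physiologically shaped curve (sharp peak followed by slow clearance), which is the sense in which the prediction is "physically informed."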

Results:

We found a robust linear correlation between our model’s predicted AIF curves and those obtained through traditional, invasive measurements. We achieved an average cross-validated Pearson correlation of 0.86 for whole blood and 0.89 for parent plasma curves. Moreover, our method’s ability to estimate the volumes of distribution across several key brain regions – without significant differences between the use of predicted versus actual AIFs in a two-tissue compartmental model – successfully captures the intrinsic variability related to sex, the binding affinity of the translocator protein (18 kDa), and age.

Conclusions:

These results not only validate our method’s accuracy and reliability but also establish a foundation for a streamlined, non-invasive approach to dynamic PET data quantification. By offering a precise and less invasive alternative to traditional quantification methods, our technique holds significant promise for expanding the applicability of PET imaging across a wider range of tracers, thereby enhancing its utility in both clinical research and diagnostic settings.

Citations: 0
Taking measurement in every direction: Implicit scene representation for accurately estimating target dimensions under monocular endoscope
IF 4.9 2区 医学 Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date : 2024-08-19 DOI: 10.1016/j.cmpb.2024.108380

Background and objectives:

In endoscopy, measurement of target size can assist medical diagnosis. However, limited operating space, low image quality, and irregular target shape pose great challenges to traditional vision-based measurement methods.

Methods:

In this paper, we propose a novel approach to measuring irregular target size under a monocular endoscope using image rendering. First, virtual poses are synthesized on the same main optical axis as the known camera poses, and an implicit neural representation module that accounts for brightness and target boundaries renders the images corresponding to these virtual poses. Then, Swin-Unet and the rotating-calipers algorithm are used to obtain the maximum pixel length of the target in image pairs sharing the same main optical axis. Finally, the similar-triangle relationship of the endoscopic imaging model is used to measure the size of the target.
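The final similar-triangle step can be sketched under a pinhole camera model: two views on the same optical axis, separated by a known baseline, determine both the target's depth and its real size from the two pixel lengths. The Swin-Unet segmentation and rotating-calipers steps are omitted, and the function name and synthetic numbers below are illustrative assumptions.

```python
def target_size_from_axial_pair(p1, p2, baseline, focal_px):
    """Estimate real target size from two views on the same optical axis.

    p1, p2   : max pixel lengths of the target in the near and far view
    baseline : known displacement between the two poses along the axis
    focal_px : focal length in pixels

    Pinhole model: p = f * L / Z. With Z2 = Z1 + baseline,
    p1 * Z1 = p2 * (Z1 + baseline)  =>  Z1 = p2 * baseline / (p1 - p2).
    """
    if p1 <= p2:
        raise ValueError("near view must have the larger pixel length")
    z1 = p2 * baseline / (p1 - p2)
    return p1 * z1 / focal_px

# synthetic check: a 5 mm target at 20 mm depth, f = 800 px, 5 mm baseline
f, size, z1, d = 800.0, 5.0, 20.0, 5.0
p1, p2 = f * size / z1, f * size / (z1 + d)
est = target_size_from_axial_pair(p1, p2, d, f)
```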

Results:

The evaluation is conducted using renal stone fragments from patients, placed in a kidney model and in an isolated porcine kidney. The mean measurement error is 0.12 mm.

Conclusions:

The proposed method can automatically measure object size within narrow body cavities in any visible direction, improving the effectiveness and accuracy of measurement in the limited endoscopic space.

Citations: 0
Very fast, high-resolution aggregation 3D detection CAM to quickly and accurately find facial fracture areas
IF 4.9 2区 医学 Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date : 2024-08-19 DOI: 10.1016/j.cmpb.2024.108379

Background and objective:

The incidence of facial fractures is on the rise globally, yet few studies address the diverse forms of facial fractures present in 3D images. In particular, because fracture directions vary and fracture lines have no clear outline, it is difficult to determine the exact location of a fracture in 2D images. Thus, 3D image analysis is required to find the exact fracture area, but it incurs heavy computational complexity and requires expensive pixel-wise labeling for supervised learning. In this study, we tackle the problem of reducing the computational burden and increasing the accuracy of fracture localization by using weakly-supervised object localization without pixel-wise labeling in a 3D image space.

Methods:

We propose a Very Fast, High-Resolution Aggregation 3D Detection CAM (VFHA-CAM) model, which can detect various facial fractures. To better detect tiny fractures, our model uses high-resolution feature maps and employs Ablation CAM to find an exact fracture location without pixel-wise labeling, where we use a rough fracture image detected with 3D box-wise labeling. To this end, we extract important features and use only essential features to reduce the computational complexity in 3D image space.
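The Ablation CAM idea referenced above, per-channel ablation weights followed by a weighted, rectified sum of feature maps, can be sketched on 2D toy activations. The paper works on 3D feature maps with feature selection; this minimal 2D version, its function name, and the toy score function are assumptions for illustration.

```python
import numpy as np

def ablation_cam(score_fn, activations):
    """Minimal Ablation-CAM over a stack of feature maps.

    score_fn    : maps an activation stack (K, H, W) to a class score
    activations : (K, H, W) feature maps of the target layer
    Returns an (H, W) localization map (no pixel-wise labels needed).
    """
    base = score_fn(activations)
    weights = np.empty(activations.shape[0])
    for k in range(activations.shape[0]):
        ablated = activations.copy()
        ablated[k] = 0.0                       # drop channel k
        weights[k] = (base - score_fn(ablated)) / (base + 1e-12)
    cam = np.tensordot(weights, activations, axes=1)
    return np.maximum(cam, 0.0)                # ReLU

# toy check: a "model" whose score depends only on channel 0
acts = np.zeros((3, 4, 4))
acts[0, 1, 1] = 1.0       # the "fracture" evidence
acts[1] = 0.5             # an irrelevant channel
score = lambda a: a[0].sum() + 1e-6
cam = ablation_cam(score, acts)
```

Ablating each channel once is what makes the plain method slow on 3D volumes; restricting the loop to a few essential channels is the kind of reduction the paper targets.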

Results:

Experimental findings demonstrate that VFHA-CAM surpasses state-of-the-art 2D detection methods by up to 20% in sensitivity/person and specificity/person, achieving scores of 87% and 85%, respectively. In addition, our VFHA-CAM reduces location-analysis time to 76 s without performance degradation, compared with a simple Ablation CAM method that takes more than 20 min.

Conclusion:

This study introduces a novel weakly-supervised object localization approach for bone fracture detection in 3D facial images. The proposed method employs a 3D detection model, which helps detect various forms of facial bone fractures accurately. The CAM algorithm adopted for fracture-area segmentation within a 3D fracture detection box is key to quickly informing medical staff of the exact location of a facial bone fracture in a weakly-supervised setting. In addition, we provide 3D visualization so that even non-experts unfamiliar with 3D CT images can identify the fracture status and location.

Citations: 0
Recurrence quantification analysis of rs-fMRI data: A method to detect subtle changes in the TgF344-AD rat model
IF 4.9 2区 医学 Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date : 2024-08-16 DOI: 10.1016/j.cmpb.2024.108378

Background and objective

Alzheimer's disease (AD) is one of the leading causes of dementia, affecting the world's population at a growing rate. The preclinical stage of AD lasts over a decade, hence understanding AD-related early neuropathological effects on brain function at this stage facilitates early detection of the disease.

Methods

Resting-state functional magnetic resonance imaging (rs-fMRI) has been a powerful tool for understanding brain function, and it has been widely used in AD research. In this study, we apply Recurrence Quantification Analysis (RQA) to rs-fMRI images of 4-month-old (4 m) and 6-month-old (6 m) TgF344-AD rats and WT littermates to identify changes related to the AD phenotype and aging. RQA was focused on areas of the default mode-like network (DMLN) and was performed on Recurrence Plots (RPs). An RP is a mathematical representation of a dynamical system that evolves over time, expressed as the set of its state recurrences. In this paper, RPs were extracted in order to identify the affected regions of the DMLN at very early stages of AD.
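A recurrence plot and the simplest RQA measure, the recurrence rate, can be sketched as follows. The embedding dimension, delay, and distance threshold below are illustrative choices, not the study's settings; a periodic signal shows far more recurrences than white noise.

```python
import numpy as np

def recurrence_plot(signal, eps, dim=3, tau=1):
    """Recurrence plot of a 1-D signal after time-delay embedding.

    R[i, j] = 1 if the embedded states at times i and j are closer
    than eps (Euclidean distance), else 0.
    """
    n = len(signal) - (dim - 1) * tau
    states = np.column_stack(
        [signal[i * tau : i * tau + n] for i in range(dim)]
    )
    d = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
    return (d < eps).astype(int)

def recurrence_rate(rp):
    """Fraction of recurrent points: the simplest RQA measure."""
    return rp.mean()

t = np.linspace(0, 8 * np.pi, 400)
rp_periodic = recurrence_plot(np.sin(t), eps=0.3)   # highly predictable
rng = np.random.default_rng(0)
rp_noise = recurrence_plot(rng.standard_normal(400), eps=0.3)
```

Measures such as determinism (the fraction of recurrent points on diagonal lines) build on the same matrix and quantify the predictability differences reported in the Results.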

Results

Using the RQA approach, we identified significant changes related to the AD phenotype at 4 m and/or 6 m in several areas of the rat DMLN, including the BFB, hippocampal fields CA1 and CA3, CG1, CG2, PrL, PtA, RSC, TeA, V1, and V2. In addition, with age, the brain activity of WT rats became less predictable, whereas the AD rats showed a smaller decline in predictability.

Conclusions

The results of this study demonstrate that RQA of rs-fMRI data is a potent approach that can detect subtle changes that might be missed by other methodologies because of the brain's non-linear dynamics. Moreover, this study provides helpful information about specific areas involved in AD pathology at very early stages of the disease in a very promising rat model of AD. Our results provide valuable information for the development of early detection methods and novel diagnostic tools for AD.

Citations: 0
An efficient dual-domain deep learning network for sparse-view CT reconstruction
IF 4.9 2区 医学 Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date : 2024-08-16 DOI: 10.1016/j.cmpb.2024.108376

Background and Objective

We develop an efficient deep-learning-based dual-domain reconstruction method for sparse-view CT with few trainable parameters and competitive running time. We aim to investigate the model's capability and its clinical value by performing objective and subjective quality assessments using clinical CT projection data acquired on commercial scanners.

Methods

We designed two lightweight networks, namely Sino-Net and Img-Net, to restore the projection and image signals from the DD-Net reconstructed images in the projection and image domains, respectively. The proposed network has few trainable parameters, runs in comparable time to other dual-domain reconstruction networks, and is easy to train end-to-end. We prospectively collected clinical thoraco-abdominal CT projection data acquired on a Siemens Biograph 128 Edge CT scanner to train and validate the proposed network. Further, we quantitatively evaluated the CT Hounsfield unit (HU) values on 21 organs and anatomic structures, such as the liver, aorta, and ribcage. We also analyzed the noise properties and compared the signal-to-noise ratio (SNR) and the contrast-to-noise ratio (CNR) of the reconstructed images. In addition, two radiologists conducted a subjective qualitative evaluation, covering the confidence and conspicuity of anatomic structures and the overall image quality, using a 1–5 Likert scoring system.
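The SNR and CNR comparisons above follow standard ROI-based definitions; one common convention is sketched below. Definitions vary across studies, and the paper's exact formulas are not given in the abstract, so the ROI names and toy HU values here are assumptions.

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of a (nominally homogeneous) ROI."""
    return roi.mean() / roi.std()

def cnr(roi, background):
    """Contrast-to-noise ratio: ROI/background contrast over background noise."""
    return abs(roi.mean() - background.mean()) / background.std()

# toy HU patches standing in for segmented organ and background ROIs
rng = np.random.default_rng(42)
liver_roi = rng.normal(60.0, 10.0, size=(32, 32))
background = rng.normal(0.0, 10.0, size=(32, 32))
```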

Results

Objective and subjective evaluation showed that the proposed algorithm achieves competitive results in eliminating noise and artifacts, restoring fine structure details, and recovering edges and contours of anatomic structures using 384 views (1/6 sparse rate). The proposed method exhibited good computational cost performance on clinical projection data.

Conclusion

This work presents an efficient dual-domain learning network for sparse-view CT reconstruction on raw projection data from a commercial scanner. The study also provides insights for designing an organ-based image quality assessment pipeline for sparse-view reconstruction tasks, potentially benefiting organ-specific dose reduction by sparse-view imaging.

Citations: 0
Ultrasound normalized cumulative residual entropy imaging: Theory, methodology, and application
IF 4.9 2区 医学 Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date : 2024-08-13 DOI: 10.1016/j.cmpb.2024.108374

Background and Objective:

Ultrasound information entropy imaging is an emerging quantitative ultrasound technique for characterizing local tissue scatterer concentrations and arrangements. However, the commonly used ultrasound Shannon entropy imaging, based on histogram-derived discrete probability estimation, suffers from dependence on histogram settings and from unknown estimator performance. In this paper, we introduced the information-theoretic cumulative residual entropy (CRE), defined on the continuous distribution of cumulative distribution functions, as a new entropy measure of ultrasound backscatter envelope uncertainty or complexity, and proposed ultrasound CRE imaging for tissue characterization.

Methods:

We theoretically analyzed the CRE for Rayleigh and Nakagami distributions and proposed a normalized CRE for characterizing scatterer distribution patterns. We proposed a method based on an empirical cumulative distribution function estimator and a trapezoidal numerical integration for estimating the normalized CRE from ultrasound backscatter envelope signals. We presented an ultrasound normalized CRE imaging scheme based on the normalized CRE estimator and the parallel computation technique. We also conducted theoretical analysis of the differential entropy which is an extension of the Shannon entropy to a continuous distribution, and introduced a method for ultrasound differential entropy estimation and imaging. Monte-Carlo simulation experiments were performed to evaluate the estimation accuracy of the normalized CRE and differential entropy estimators. Phantom simulation and clinical experiments were conducted to evaluate the performance of the proposed normalized CRE imaging in characterizing scatterer concentrations and hepatic steatosis (n = 204), respectively.
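The estimator described above, an empirical survival function integrated with the trapezoidal rule, can be sketched as follows. The abstract does not state the normalizer; dividing by the RMS envelope value is assumed here because it reproduces the theoretical value of sqrt(pi)/4 reported for Rayleigh statistics.

```python
import numpy as np

def normalized_cre(x):
    """Empirical cumulative residual entropy, normalized by the RMS value.

    CRE(X) = -integral of S(x) * ln S(x) dx, with S the survival
    function. S is estimated from the empirical CDF and the integral
    is evaluated with the trapezoidal rule. Dividing by sqrt(E[X^2])
    is an assumed normalizer chosen so that a Rayleigh envelope
    yields sqrt(pi)/4.
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    s = 1.0 - np.arange(1, n + 1) / n          # empirical survival at x_(i)
    with np.errstate(divide="ignore", invalid="ignore"):
        integrand = np.where(s > 0.0, -s * np.log(s), 0.0)
    cre = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))
    return cre / np.sqrt(np.mean(x ** 2))

rng = np.random.default_rng(0)
envelope = rng.rayleigh(scale=2.0, size=200_000)
ncre = normalized_cre(envelope)                # theory: sqrt(pi)/4 ≈ 0.443
```

Because both the CRE and the RMS scale linearly with the envelope amplitude, the normalized value is scale-invariant, which is what makes it usable as a tissue-characterization parameter.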

Results:

The theoretical normalized CRE for the Rayleigh distribution was √π/4, corresponding to the case where there were ≥ 10 randomly distributed scatterers within the resolution cell of an ultrasound transducer. The theoretical normalized CRE for the Nakagami distribution decreased as the Nakagami parameter m increased, corresponding to the ultrasound backscattered statistics varying from pre-Rayleigh to Rayleigh and to post-Rayleigh distributions. Monte-Carlo simulation experiments showed that the proposed normalized CRE and differential entropy estimators can produce satisfying estimation accuracy even when the size of the test samples is small. Phantom simulation experiments showed that the proposed normalized CRE and differential entropy imaging can characterize scatterer concentrations. Clinical experiments showed that the proposed ultrasound normalized CRE imaging is capable of quantitatively characterizing hepatic steatosis, outperforming ultrasound differential entropy imaging and performing comparably to ultrasound Shannon entropy imaging and Nakagami imaging.

Conclusions:

This study establishes the theory and methodology of ultrasound normalized CRE imaging. The proposed normalized CRE can serve as a new and flexible quantitative ultrasound envelope-statistics parameter, and the proposed imaging may be applied to the quantitative characterization of biological tissues. Our code will be publicly released at https://github.com/zhouzhuhuang.
Citations: 0