
Computer methods and programs in biomedicine — Latest Articles

Lazy Resampling: Fast and information preserving preprocessing for deep learning
IF 4.9 | Medicine (CAS Zone 2) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-09-19 | DOI: 10.1016/j.cmpb.2024.108422

Background and Objective:

Preprocessing of data is a vital step for almost all deep learning workflows. In computer vision, manipulation of data intensity and spatial properties can improve network stability and can provide an important source of generalisation for deep neural networks. Models are frequently trained with preprocessing pipelines composed of many stages, but these pipelines come with a drawback: each stage that resamples the data costs time, degrades image quality, and adds bias to the output. Long pipelines can also be complex to design, especially in medical imaging, where cropping data early can cause significant artifacts.

Methods:

We present Lazy Resampling, a software framework that recasts spatial preprocessing operations as a graphics pipeline. Rather than each transform modifying the data individually, the transforms generate transform descriptions that are composited into a single resample operation wherever possible. This reduces pipeline execution time and, most importantly, limits signal degradation. It also enables simpler pipeline design, as crops and other operations become non-destructive. Lazy Resampling is designed to provide the maximum benefit to users without requiring them to understand the underlying concepts or change the way they build pipelines.
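The compositing step described here can be sketched in a few lines. Below is a minimal 2D illustration with hypothetical helpers (`rotation`, `translation`, `lazy_apply`) — an assumption-laden sketch of the idea, not MONAI's actual API: each transform only records a homogeneous matrix, and the image is interpolated exactly once.

```python
import numpy as np
from scipy.ndimage import affine_transform

def rotation(theta):
    """2D homogeneous rotation matrix (hypothetical helper)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def translation(dy, dx):
    """2D homogeneous translation matrix (hypothetical helper)."""
    return np.array([[1.0, 0.0, dy], [0.0, 1.0, dx], [0.0, 0.0, 1.0]])

def lazy_apply(image, pending):
    """Composite all pending transforms, then resample a single time."""
    combined = np.linalg.multi_dot(pending) if len(pending) > 1 else pending[0]
    inv = np.linalg.inv(combined)  # affine_transform maps output -> input coords
    return affine_transform(image, inv[:2, :2], offset=inv[:2, 2], order=1)

img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0
# Three queued transforms (rotate about the image centre); one interpolation pass.
pending = [translation(32, 32), rotation(np.pi / 8), translation(-32, -32)]
out = lazy_apply(img, pending)
```

A traditional pipeline would interpolate three times here; the lazy version pays the interpolation cost (and its blurring) only once.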

Results:

We evaluate Lazy Resampling by comparing traditional pipelines with the corresponding lazy pipelines on Medical Segmentation Decathlon datasets. We demonstrate lower information loss in lazy pipelines than in traditional pipelines. We demonstrate that Lazy Resampling avoids the catastrophic loss of semantic segmentation label accuracy that occurs in traditional pipelines when labels are passed through a pipeline and then back through its inverse. Finally, we demonstrate statistically significant improvements when training UNets for semantic segmentation.

Conclusion:

Lazy Resampling reduces the loss of information that occurs when running processing pipelines that traditionally have multiple resampling steps, and it enables researchers to build simpler pipelines by making operations such as rotation and cropping effectively non-destructive. It also makes it possible to invert labels back through a pipeline without catastrophic loss of accuracy.
A reference implementation for Lazy Resampling can be found at https://github.com/KCL-BMEIS/LazyResampling. Lazy Resampling is being implemented as a core feature in MONAI, an open-source Python-based deep learning library for medical imaging, with a roadmap for full integration.
A robust myoelectric pattern recognition framework based on individual motor unit activities against electrode array shifts
IF 4.9 | Medicine (CAS Zone 2) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-09-19 | DOI: 10.1016/j.cmpb.2024.108434

Background and objective

Electrode shift is one of the critical factors that compromise the performance of myoelectric pattern recognition (MPR) based on the surface electromyogram (SEMG). Current studies have focused on global features of SEMG signals to mitigate this issue, but such features offer only an oversimplified description of human movements and do not incorporate microscopic neural-drive information. The objective of this work is to develop a novel method for calibrating electrode-array shifts toward achieving robust MPR, leveraging individual motor unit (MU) activities obtained through advanced SEMG decomposition.

Methods

All of the MUs obtained by decomposing SEMG data recorded at the original electrode-array position were first used to train a neural network for pattern recognition. After a shift, a subset of the decomposed MUs could be tracked and paired with MUs obtained at the original position based on the spatial distribution of their MUAP waveforms, so as to determine the shift vector (describing both the orientation and distance of the shift) implied consistently by these MU pairs. Given the known shift vector, the features of the post-shift decomposed MUs were corrected accordingly and then fed into the network to finalize the MPR task. The performance of the proposed method was evaluated on data recorded by a 16 × 8 electrode array placed over the finger extensor muscles of 8 subjects performing 10 finger movement patterns.
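One plausible way to recover a shift vector from paired MUAP spatial maps is 2D cross-correlation. The sketch below is an illustrative assumption, not necessarily the paper's exact algorithm: each paired MU's amplitude map before and after the shift is cross-correlated, and the per-pair peak displacements are averaged into a consensus shift vector.

```python
import numpy as np
from scipy.signal import correlate2d

def estimate_shift(map_before, map_after):
    """Peak of the full 2D cross-correlation gives (d_row, d_col)."""
    corr = correlate2d(map_after, map_before, mode="full")
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return (peak[0] - (map_before.shape[0] - 1),
            peak[1] - (map_before.shape[1] - 1))

rows, cols = 16, 8                      # electrode grid as in the study
r, c = np.mgrid[0:rows, 0:cols]

def blob(cr, cc):                       # synthetic MUAP amplitude map
    return np.exp(-((r - cr) ** 2 + (c - cc) ** 2) / 4.0)

# Two MU pairs whose maps moved by the same true shift of (2, 1) electrodes.
pairs = [(blob(6, 3), blob(8, 4)), (blob(9, 2), blob(11, 3))]
shifts = np.array([estimate_shift(before, after) for before, after in pairs])
shift_vector = shifts.mean(axis=0)      # consensus across MU pairs
```

Averaging over multiple MU pairs, as the abstract describes, makes the estimate robust to individual pairing errors.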

Results

The proposed method achieved a shift detection accuracy of 100 % and a pattern recognition accuracy approaching 100 %, significantly outperforming conventional methods in both shift detection and pattern recognition accuracy (p < 0.05).

Conclusions

Our method demonstrated the feasibility of using the spatial distributions of decomposed MUAP waveforms to calibrate electrode shift. This study provides a new tool for enhancing the robustness of myoelectric control systems via microscopic neural-drive information at the individual-MU level.
Decoding motor imagery loaded on steady-state somatosensory evoked potential based on complex task-related component analysis
IF 4.9 | Medicine (CAS Zone 2) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-09-19 | DOI: 10.1016/j.cmpb.2024.108425

Background and objective

Motor imagery (MI) recognition is one of the most critical decoding problems in the brain–computer interface field. When MI is combined with the steady-state somatosensory evoked potential (MI-SSSEP), the hybrid paradigm can achieve higher recognition accuracy than the traditional MI paradigm. Typical algorithms, however, do not fully consider the characteristics of MI-SSSEP signals. Developing an algorithm that fully captures the paradigm's characteristics so as to reduce the false-triggering rate is the next step in improving performance.

Methods

Based on the features of the SSSEP signal, we propose using a complex-signal task-related component analysis (cTRCA) algorithm for spatial filtering. Analysis of simulated signals shows that task-related component analysis (TRCA), as a typical method, is degraded when the responses between stimuli have reduced correlation, and that the proposed algorithm effectively overcomes this problem. Experimental data under the MI-SSSEP paradigm were used to identify right-hand target tasks, and three distinct interference tasks were used to test the false-triggering rate. cTRCA demonstrated superior performance, as confirmed by the Wilcoxon signed-rank test.
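A minimal sketch of the cTRCA idea, under stated assumptions (the paper's exact formulation may differ): trials are converted to complex analytic signals via the Hilbert transform, the inter-trial covariance S and total covariance Q are built with conjugate transposes, and the spatial filter is the leading eigenvector of Q⁻¹S.

```python
import numpy as np
from scipy.signal import hilbert

def ctrca_filter(trials):
    """trials: real array of shape (n_trials, n_channels, n_samples)."""
    analytic = hilbert(trials, axis=-1)            # complex analytic signals
    n_trials, n_ch, _ = analytic.shape
    S = np.zeros((n_ch, n_ch), dtype=complex)
    for i in range(n_trials):
        for j in range(n_trials):
            if i != j:                             # inter-trial covariance only
                S += analytic[i] @ analytic[j].conj().T
    X = np.concatenate(list(analytic), axis=1)     # all trials side by side
    Q = X @ X.conj().T                             # total covariance
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Q, S))
    return eigvecs[:, np.argmax(eigvals.real)]     # leading spatial filter

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 256, endpoint=False)
source = np.sin(2 * np.pi * 23 * t)                # shared task-related rhythm
trials = np.stack([
    np.vstack([source + 0.3 * rng.standard_normal(256),   # channel 0: signal
               0.5 * rng.standard_normal(256)])           # channel 1: noise
    for _ in range(6)
])
w = ctrca_filter(trials)   # should weight the task-related channel most
```

Because the analytic signal carries phase explicitly, a filter built this way can stay aligned with phase-shifted responses, which is the property the abstract attributes to cTRCA.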

Results

The recognition algorithm combining cTRCA with mutual information-based best individual feature (MIBIF) selection and minimum distance to mean (MDM) classification achieved an AUC of up to 0.89, much higher than the traditional combination of common spatial patterns (CSP) and a support vector machine (SVM) (average AUC 0.77, p < 0.05). Compared with CSP+SVM, the proposed model reduced the false-triggering rate from 38.69 % to 20.74 % (p < 0.001).

Conclusions

This research proves that TRCA is adversely affected by MI-SSSEP signals. The results further show that the motor imagery task in the MI-SSSEP paradigm causes a phase change in the evoked potential, and that the cTRCA algorithm based on this phase change is better suited to the hybrid paradigm, more effectively decoding the motor imagery task and reducing the false-triggering rate.
Does GLP-1 cause post-bariatric hypoglycemia: ‘Computer says no’
IF 4.9 | Medicine (CAS Zone 2) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-09-19 | DOI: 10.1016/j.cmpb.2024.108424

Background and Objective:

Patients who underwent Roux-en-Y gastric bypass surgery for the treatment of obesity or diabetes can suffer from post-bariatric hypoglycemia (PBH). It has been assumed that PBH is caused by increased levels of the hormone GLP-1. In this research, we elucidate the role of GLP-1 in PBH with a physiology-based mathematical model.

Methods:

The Eindhoven Diabetes Simulator (EDES) model, which simulates postprandial glucose homeostasis, was adapted to include the effect of GLP-1 on insulin secretion. Parameter sensitivity analysis was used to identify parameters that could cause PBH. Virtual patient models were created by defining sets of model parameters based on 63 participants from the HypoBaria study cohort, before and one year after bariatric surgery.
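The sensitivity-analysis workflow can be illustrated on a toy model — this is NOT the EDES model, just a minimal gut/glucose/insulin ODE with an assumed absorption rate `k_abs` and insulin sensitivity `s_i`, where the sensitivity of the postprandial glucose nadir to each parameter is estimated by finite differences.

```python
import numpy as np
from scipy.integrate import solve_ivp

def simulate(k_abs, s_i, g_meal=10.0):
    """Toy postprandial model; returns the glucose nadir (arbitrary units)."""
    def rhs(t, y):
        gut, glucose, insulin = y
        absorption = k_abs * gut
        d_gut = -absorption
        d_glucose = absorption - s_i * insulin * glucose
        d_insulin = 0.02 * (glucose - 5.0) - 0.05 * insulin  # delayed response
        return [d_gut, d_glucose, d_insulin]
    sol = solve_ivp(rhs, (0.0, 600.0), [g_meal, 5.0, 0.0], max_step=1.0)
    return sol.y[1].min()            # the insulin lag produces a dip below 5

def sensitivity(param, eps=1e-3):
    """Local sensitivity of the nadir to one parameter (finite difference)."""
    base = {"k_abs": 0.05, "s_i": 0.02}
    up = dict(base)
    up[param] *= 1.0 + eps
    return (simulate(**up) - simulate(**base)) / (base[param] * eps)

nadir = simulate(0.05, 0.02)
s_kabs = sensitivity("k_abs")        # nadir response to faster gut absorption
s_si = sensitivity("s_i")            # nadir response to insulin sensitivity
```

Screening many parameters this way flags which ones can push the postprandial nadir into the hypoglycemic range — the same logic the abstract applies to absorption speed and insulin sensitivity.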

Results:

Simulations with the virtual patient models showed that glycemic excursions can be correctly simulated for the study population, despite heterogeneity in the glucose, insulin and GLP-1 data. Sensitivity analysis showed that GLP-1-stimulated insulin secretion alone was not able to cause PBH. Instead, the analyses showed that the increased transit speed of ingested food resulted in rapid and increased glucose absorption in the gut after surgery, which in turn induced postprandial glycemic dips. Furthermore, according to the model, the post-bariatric increase in glucose absorption rate, in combination with different levels of insulin sensitivity, can result in PBH.

Conclusions:

Our model findings imply that if the initial rapid improvement in insulin sensitivity after gastric bypass surgery is followed by a more gradual decrease in insulin sensitivity, PBH may emerge after a prolonged time (months to years after surgery).
NecroGlobalGCN: Integrating micronecrosis information in HCC prognosis prediction via graph convolutional neural networks
IF 4.9 | Medicine (CAS Zone 2) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-09-19 | DOI: 10.1016/j.cmpb.2024.108435

Background and Objective

Hepatocellular carcinoma (HCC) ranks fourth in cancer mortality, underscoring the importance of accurate prognostic prediction for improving postoperative survival rates. Although micronecrosis has been shown to have high prognostic value in HCC, applying it to clinical prognosis prediction requires specialized knowledge and complex calculations, which poses challenges for clinicians. It would therefore be valuable to develop a model that helps clinicians make full use of micronecrosis when assessing patient survival.

Methods

To address these challenges, we propose an HCC prognosis prediction model that integrates pathological micronecrosis information through graph convolutional neural networks (GCN). This approach enables the GCN to exploit micronecrosis, which has been shown to be highly correlated with prognosis, thereby significantly enhancing the quality of prognostic stratification. We developed the model using 3622 slides from 752 patients with primary HCC from the FAH-ZJUMS dataset and conducted internal and external validation on the FAH-ZJUMS and TCGA-LIHC datasets, respectively.
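For readers unfamiliar with graph convolutions, here is a minimal numpy sketch of the standard GCN update H′ = ReLU(D⁻¹ᐟ²(A + I)D⁻¹ᐟ² H W) — an illustration of the building block only, with an assumed toy graph, not NecroGlobalGCN's actual architecture; in this setting nodes could represent tissue patches and edges their spatial adjacency.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One symmetric-normalized graph convolution with ReLU."""
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))      # D^{-1/2}
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)       # 4 patches, ring adjacency
H = rng.standard_normal((4, 8))                 # 8 input features per patch
W = rng.standard_normal((8, 3))                 # project to 3 output features
H1 = gcn_layer(A, H, W)
graph_embedding = H1.mean(axis=0)               # global pooling for a prognosis head
```

Stacking such layers lets each patch aggregate information from its neighbourhood before a pooled embedding feeds the survival prediction head.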

Results

Our method outperformed the baseline by 8.18% in internal validation and 9.02% in external validation. Overall, this paper presents a deep learning research paradigm that integrates HCC micronecrosis, enhancing both the accuracy and interpretability of prognostic predictions, with potential applicability to other pathological prognostic markers.

Conclusions

This study proposes a composite GCN prognostic model that integrates information on HCC micronecrosis, built on a large collected dataset of HCC histopathological images. The approach could assist clinicians in analyzing HCC patient survival and in precisely locating and visualizing the necrotic tissues that affect prognosis. Following the research paradigm outlined in this paper, other prognostic-biomarker integration models with GCN could be developed, significantly enhancing the predictive performance and interpretability of prognostic models.
Combining clinical and molecular data for personalized treatment in acute myeloid leukemia: A machine learning approach
IF 4.9 | Medicine (CAS Zone 2) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-09-18 | DOI: 10.1016/j.cmpb.2024.108432

Background and Objective

The standard of care for Acute Myeloid Leukemia (AML) patients has remained essentially unchanged for nearly 40 years. Due to the complicated mutational patterns within and between individual patients, and a lack of targeted agents for most mutational events, implementing individualized treatment for AML has proven difficult. We reanalysed the BeatAML dataset employing machine learning algorithms. The BeatAML project comprises patients extensively characterized at the molecular and clinical levels and linked to drug-sensitivity outputs. Our approach capitalizes on the molecular and clinical data provided by the BeatAML dataset to predict ex vivo drug sensitivity for the 122 drugs evaluated by the project.

Methods

We utilized ElasticNet, which produces fully interpretable models, in combination with a two-step training protocol that allowed us to narrow down the computations. We automated the gene-filtering step by employing two metrics, and we evaluated all possible data combinations to identify the best training configuration per drug.
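A hedged sketch of this two-step protocol on synthetic data — the paper does not specify its two filtering metrics here, so variance and absolute correlation with the response are assumed purely for illustration:

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
n_patients, n_genes = 80, 500
X = rng.standard_normal((n_patients, n_genes))             # mock expression matrix
true_coef = np.zeros(n_genes)
true_coef[:5] = [2.0, -1.5, 1.0, 0.8, -0.6]                # 5 informative genes
y = X @ true_coef + 0.1 * rng.standard_normal(n_patients)  # mock ex vivo response

# Step 1: automated gene filtering with two metrics (assumed: a permissive
# variance floor plus a correlation screen keeping the top 20 % of genes).
var_mask = X.var(axis=0) > 0.25
corr = np.abs([np.corrcoef(X[:, g], y)[0, 1] for g in range(n_genes)])
corr_mask = corr > np.quantile(corr, 0.80)
keep = np.flatnonzero(var_mask & corr_mask)

# Step 2: fully interpretable sparse linear model on the surviving genes.
model = ElasticNet(alpha=0.05, l1_ratio=0.5).fit(X[:, keep], y)
selected = keep[np.abs(model.coef_) > 1e-6]                # genes the model retains
```

Pre-filtering shrinks the feature space before the per-drug ElasticNet fits, which is what makes an exhaustive search over training configurations computationally feasible.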

Results

We report a Pearson correlation across all drugs of 0.36 when clinical and RNA-sequencing data were combined, with the best-performing models reaching a Pearson correlation of 0.67. When we trained on each dataset in isolation, RNA-sequencing data (Pearson: 0.36) attained three times the predictive power of whole-exome-sequencing data (Pearson: 0.11), with clinical data falling in between (Pearson: 0.26). Lastly, we present a paradigm of clinical significance. We used our models' predictions as a drug-sensitivity score to rank an individual's expected response to treatment. For 78 of 89 patients (88 %), the proposed drug was more potent than the administered one based on their ex vivo drug-sensitivity data.
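The ranking paradigm amounts to sorting each patient's predicted sensitivities and comparing the top-ranked drug with the one actually given. The sketch below is illustrative only — the drug names and scores are invented, not BeatAML outputs:

```python
import numpy as np

drugs = ["drug_A", "drug_B", "drug_C"]                 # hypothetical panel
predicted = {                                          # higher = more sensitive
    "patient_1": np.array([0.82, 0.41, 0.67]),
    "patient_2": np.array([0.30, 0.75, 0.52]),
}
administered = {"patient_1": "drug_B", "patient_2": "drug_B"}

flagged = {}                                           # proposed != administered?
for patient, scores in predicted.items():
    order = np.argsort(scores)[::-1]                   # best-ranked drug first
    proposed = drugs[order[0]]
    flagged[patient] = proposed != administered[patient]
```

Counting the flagged patients is how a figure like "78 of 89 (88 %)" would be tallied against the ex vivo ground truth.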

Conclusions

In conclusion, our reanalysis of the BeatAML dataset using Machine Learning algorithms demonstrates the potential for individualized treatment prediction in Acute Myeloid Leukemia patients, addressing the longstanding challenge of treatment personalization in this disease. By leveraging molecular and clinical data, our approach yields promising correlations between predicted drug sensitivity and actual responses, highlighting a significant step forward in improving therapeutic outcomes for AML patients.
Development, calibration and validation of impact-specific cervical spine models: A novel approach using hybrid multibody and finite-element methods
IF 4.9 Medicine (CAS Tier 2) Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date : 2024-09-18 DOI: 10.1016/j.cmpb.2024.108430

Background and Objective:

Spinal cord injuries can have a severe impact on athletes' or patients' lives. High axial impact scenarios such as tackling and scrummaging can cause hyperflexion and buckling of the cervical spine, which is often associated with bilateral facet dislocation. Typically, finite-element (FE) or musculoskeletal models are applied to investigate these scenarios; however, they have the drawbacks of high computational cost and lack of soft tissue information, respectively. Moreover, material properties of the involved tissues are commonly tested in quasi-static conditions, which do not accurately capture the mechanical behavior during impact scenarios. Thus, the aim of this study was to develop, calibrate and validate an approach for the creation of impact-specific hybrid rigid-body/finite-element spine models for highly dynamic axial impact scenarios.

Methods:

Five porcine cervical spine models were used to replicate in-vitro experiments and to calibrate the stiffness and damping parameters of the intervertebral joints by matching the kinematics of the in-vitro experiments with those of the in-silico experiments. Afterwards, a five-fold cross-validation was conducted. Additionally, the von Mises stress of the lumped FE-discs was investigated during impact.
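With five specimens, the five-fold cross-validation amounts to a leave-one-out split in which each specimen serves once as the held-out validation case. The sketch below is a generic illustration of such a split, not the authors' code.

```python
# Generic k-fold splitter: partition the items into k folds, then yield
# (training, held_out) pairs with each fold held out exactly once.
# With five specimens and k=5 this is leave-one-out.

def k_fold_splits(items, k=5):
    folds = [items[i::k] for i in range(k)]
    for i in range(k):
        held_out = folds[i]
        training = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield training, held_out
```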

Results:

The results of the calibration and validation of our hybrid approach agree well with the in-vitro experiments. The stress maps of the lumped FE-discs showed that the highest stress of the most superior lumped disc was located anteriorly, while the remaining lumped discs had their maxima in the posterior portion.

Conclusion:

Our hybrid method demonstrated the importance of impact-specific modeling. Overall, our hybrid modeling approach enhances the possibilities of identifying spine injury mechanisms by facilitating dynamic, impact-specific computational models.
Patient-specific pulmonary venous flow characterization and its impact on left atrial appendage thrombosis in atrial fibrillation patients
IF 4.9 Medicine (CAS Tier 2) Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date : 2024-09-17 DOI: 10.1016/j.cmpb.2024.108428

Background

Cardioembolic strokes commonly occur in non-valvular atrial fibrillation (AF) patients, with over 90% of cases originating from clots in the left atrial appendage (LAA); thrombus formation there is believed to be closely related to hemodynamic characteristics. Numerical simulation is widely used in hemodynamic analysis, and patient-specific boundary conditions are required for realistic numerical simulations.

Method

This paper first proposes a method that maps personalized pulmonary venous flow (PVF) from the volume changes of the left atrium (LA) over the cardiac cycle. We then used data from patients with AF to investigate the correlation between PVF patterns and hemodynamics within the LAA. In addition, we conducted a fluid-structure interaction analysis to assess the impact of velocity- and time-related PVF parameters on LAA hemodynamic characteristics.
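A minimal sketch of the volume-based PVF mapping, under strong simplifying assumptions not stated in this abstract: during the reservoir phase (mitral valve closed) the net pulmonary venous inflow approximately equals dV_LA/dt, and the flow is split equally across four veins. The finite-difference scheme and function name are illustrative.

```python
# Illustrative sketch: derive a pulmonary venous flow waveform from sampled
# LA volumes. Assumptions (hypothetical): net PV inflow ~ dV_LA/dt with the
# mitral valve closed, and an equal split across four pulmonary veins.

def pv_flow_from_volume(volumes, dt, n_veins=4):
    """volumes: LA volume samples [mL]; dt: sampling interval [s].
    Returns per-vein flow rate [mL/s] via finite differences
    (one-sided at the ends, central in the interior)."""
    n = len(volumes)
    total = []
    for i in range(n):
        if i == 0:
            dv = (volumes[1] - volumes[0]) / dt
        elif i == n - 1:
            dv = (volumes[-1] - volumes[-2]) / dt
        else:
            dv = (volumes[i + 1] - volumes[i - 1]) / (2 * dt)
        total.append(dv)
    return [q / n_veins for q in total]
```

In practice the per-vein split would be taken from imaging rather than assumed equal; the sketch only shows how a volume curve yields a flow boundary condition.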

Results

The analysis reveals that the ratio of systolic to diastolic peak velocity (VS/VD) and the systolic velocity-time integral (VTI) have a significant influence on LAA velocity in patients with atrial fibrillation, and increases in these velocity- and time-related parameters were positively correlated with blood renewal in the LAA.

Conclusions

This study established a method for mapping patient-specific PVF based on LA volume change and evaluated the relationship between PVF parameters and thrombosis risk. The present work offers insight, based on PVF characteristics, into evaluating the risk of thrombus formation within the LAA in patients with AF.
A novel GAN-based three-axis mutually supervised super-resolution reconstruction method for rectal cancer MR image
IF 4.9 Medicine (CAS Tier 2) Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date : 2024-09-16 DOI: 10.1016/j.cmpb.2024.108426

Background and objective

This study aims to enhance the resolution in the axial direction of rectal cancer magnetic resonance (MR) imaging scans to improve the accuracy of visual interpretation and quantitative analysis. MR imaging is a critical technique for the diagnosis and treatment planning of rectal cancer. However, obtaining high-resolution MR images is both time-consuming and costly. As a result, many hospitals store only a limited number of slices, often leading to low-resolution MR images, particularly in the axial plane. Given the importance of image resolution in accurate assessment, these low-resolution images frequently lack the necessary detail, posing substantial challenges for both human experts and computer-aided diagnostic systems. Image super-resolution (SR), a technique developed to enhance image resolution, was originally applied to natural images. Its success has since led to its application in various other tasks, especially in the reconstruction of low-resolution MR images. However, most existing SR methods fail to account for all anatomical planes during reconstruction, leading to unsatisfactory results when applied to rectal cancer MR images.

Methods

In this paper, we propose a GAN-based three-axis mutually supervised super-resolution reconstruction method tailored for low-resolution rectal cancer MR images. Our approach involves performing one-dimensional (1D) intra-slice SR reconstruction along the axial direction for both the sagittal and coronal planes, coupled with inter-slice SR reconstruction based on slice synthesis in the axial direction. To further enhance the accuracy of super-resolution reconstruction, we introduce a consistency supervision mechanism across the reconstruction results of the different axes, promoting mutual learning between them. A key innovation of our method is the introduction of Depth-GAN for synthesizing intermediate slices in the axial plane, which incorporates depth information and leverages Generative Adversarial Networks (GANs) for this purpose. Additionally, we enhance the accuracy of intermediate slice synthesis by employing a combination of supervised and unsupervised interactive learning techniques throughout the process.

Results

We conducted extensive ablation studies and comparative analyses with existing methods to validate the effectiveness of our approach. On the test set from Shanxi Cancer Hospital, our method achieved a Peak Signal-to-Noise Ratio (PSNR) of 34.62 and a Structural Similarity Index (SSIM) of 96.34 %. These promising results demonstrate the superiority of our method.
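The reported PSNR can in principle be computed in a few lines; the sketch below assumes flattened pixel sequences with a known peak value and is illustrative rather than the authors' evaluation code (SSIM, which involves local windowed statistics, is omitted).

```python
# Illustrative PSNR between a reference and a reconstructed image, both
# given as flat pixel sequences. Assumption: an 8-bit peak of 255 unless
# stated otherwise.
import math

def psnr(reference, reconstructed, peak=255.0):
    mse = sum((r - s) ** 2 for r, s in zip(reference, reconstructed)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(peak ** 2 / mse)
```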
Investigating the effect of brain atrophy on transcranial direct current stimulation: A computational study using ADNI dataset
IF 4.9 Medicine (CAS Tier 2) Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date : 2024-09-16 DOI: 10.1016/j.cmpb.2024.108429

Background

Transcranial direct current stimulation (tDCS) is a non-invasive brain stimulation technique that uses weak electrical currents to modulate brain activity, thus potentially aiding the treatment of brain diseases. Although tDCS offers convenience, it yields inconsistent electric-field distributions among individuals. This inconsistency may be attributed to certain factors, such as brain atrophy. Brain atrophy is accompanied by increased cerebrospinal fluid (CSF) volume. Owing to the high electrical conductivity of CSF, its increased volume complicates current delivery to the brain, thus resulting in greater inter-subject variability.

Objective

We aim to investigate the differences in tDCS-induced electric fields between groups with different severities of brain atrophy.

Methods

We classified 180 magnetic resonance images into four groups based on the presence of Alzheimer's disease and sex. We used two montages, i.e., F3 & Fp2 and TP9 & TP10, to target the left rostral middle frontal gyrus and the hippocampus/amygdala complex, respectively. Differences between the groups in terms of regional volume variation, stimulation effect, and correlation were analyzed.

Results

Significant differences were observed in the geometrical variations of the CSF and two target regions. Electric fields induced by tDCS were similar in both sexes. Unique patterns were observed in each group in the correlation analysis.

Conclusion

Our findings show that factors such as brain atrophy affect the tDCS results and that the factors present complex relationships. Further studies are necessary to better understand the relationships between these factors and optimize tDCS as a therapeutic tool.
Journal: Computer methods and programs in biomedicine