
Computers in biology and medicine: Latest Publications

Using 3D point cloud and graph-based neural networks to improve the estimation of pulmonary function tests from chest CT
IF 7 · CAS Tier 2 (Medicine) · Q1 BIOLOGY · Pub Date: 2024-09-27 · DOI: 10.1016/j.compbiomed.2024.109192
Pulmonary function tests (PFTs) are important clinical metrics for measuring the severity of interstitial lung disease in systemic sclerosis patients. However, PFTs cannot always be performed by spirometry if there is a risk of disease transmission or other contraindications. In addition, it is unclear how lung function is affected by changes in lung vessels. Therefore, convolutional neural networks (CNNs) were previously proposed to estimate PFTs from chest CT scans (CNN-CT) and extracted vessels (CNN-Vessel). Due to GPU memory constraints, however, these networks used down-sampled images, which causes a loss of information on small vessels. Previous literature has indicated that detailed vessel information from CT scans can be helpful for PFT estimation. Therefore, this paper proposes a point cloud neural network (PNN-Vessel) and a graph neural network (GNN-Vessel) to estimate PFTs from point cloud and graph-based representations of pulmonary vessel centerlines, respectively. We then combine the different networks and perform multivariable stepwise regression analysis to explore whether vessel-based networks can contribute to PFT estimation in addition to CNN-CT. Results showed that both PNN-Vessel and GNN-Vessel outperformed CNN-Vessel, by 14% and 4% respectively, when averaged across the intra-class correlation coefficient (ICC) scores of four PFT metrics. In addition, compared to CNN-Vessel, PNN-Vessel used 30% of the training time (1.1 h) and 7% of the parameters (2.1 M), and GNN-Vessel used only 7% of the training time (0.25 h) and 0.7% of the parameters (0.2 M). We combined CNN-CT, PNN-Vessel and GNN-Vessel with the weights obtained from multivariable regression, which achieved the best PFT estimation accuracy (ICC of 0.748, 0.742, 0.836 and 0.835 for the four PFT measures, respectively). The results verified that more detailed vessel information can provide further explanation for PFT estimation from anatomical imaging.
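The combination step this abstract describes, weighting per-network PFT predictions with coefficients from a multivariable regression and scoring agreement with the intra-class correlation coefficient, can be sketched on synthetic data. The ICC(2,1) variant, the noise levels, and every name below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def icc_2_1(ratings):
    # ratings: (n_subjects, k_raters); two-way random, single-measure ICC(2,1)
    n, k = ratings.shape
    mean_r = ratings.mean(axis=1, keepdims=True)   # per-subject means
    mean_c = ratings.mean(axis=0, keepdims=True)   # per-rater means
    grand = ratings.mean()
    ssr = k * ((mean_r - grand) ** 2).sum()        # between-subject sum of squares
    ssc = n * ((mean_c - grand) ** 2).sum()        # between-rater sum of squares
    sse = ((ratings - mean_r - mean_c + grand) ** 2).sum()  # residual
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical per-subject PFT values plus three noisy network estimates
rng = np.random.default_rng(0)
truth = rng.normal(80, 15, size=50)                # e.g. FVC %predicted
preds = np.stack([truth + rng.normal(0, s, 50) for s in (8, 6, 7)], axis=1)

# Least-squares weights for the combined estimate (with intercept)
X = np.column_stack([np.ones(len(truth)), preds])
w, *_ = np.linalg.lstsq(X, truth, rcond=None)
combined = X @ w

print(round(icc_2_1(np.column_stack([combined, truth])), 3))
```

Here the combined prediction and the ground truth act as the two "raters" of the ICC, mirroring how agreement between estimated and measured PFTs is typically reported.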
Citations: 0
Wfold: A new method for predicting RNA secondary structure with deep learning
IF 7 · CAS Tier 2 (Medicine) · Q1 BIOLOGY · Pub Date: 2024-09-27 · DOI: 10.1016/j.compbiomed.2024.109207
Precise estimations of RNA secondary structures have the potential to reveal the various roles that non-coding RNAs play in regulating cellular activity. However, traditional RNA secondary structure prediction methods mainly rely on thermodynamic models via free energy minimization, a laborious process that requires substantial prior knowledge. Here, we suggest Wfold, an end-to-end deep learning-based approach to RNA secondary structure prediction. Wfold is trained directly on annotated data and base-pairing criteria. It makes use of an image-like representation of RNA sequences, which an enhanced U-net incorporating a transformer encoder can process effectively. Wfold ultimately increases the accuracy of RNA secondary structure prediction by combining the benefits of the self-attention mechanism's mining of long-range information with U-net's ability to gather local information. We compare Wfold's performance using RNA datasets that are within and across families. When trained and evaluated on different RNA families, it achieves performance similar to traditional methods, but dramatically outperforms the state-of-the-art methods on within-family datasets. Moreover, Wfold can also reliably forecast pseudoknots. The findings imply that Wfold may be useful for improving sequence alignment, functional annotations, and RNA structure modeling.
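A common way to build the "image-like representation of RNA sequences" mentioned above is a pairwise encoding of the sequence. The 16-channel outer-product encoding and the canonical-pair mask below are assumptions for illustration, not necessarily Wfold's exact scheme:

```python
import numpy as np

BASES = "ACGU"
# Watson-Crick plus wobble pairs (an assumption about the pairing criteria)
CANONICAL = {("A", "U"), ("U", "A"), ("C", "G"), ("G", "C"), ("G", "U"), ("U", "G")}

def sequence_to_image(seq):
    """Encode a sequence as an L x L x 16 'image': channel block (i, j) is the
    outer product of the one-hot encodings of bases seq[i] and seq[j]."""
    one_hot = np.zeros((len(seq), 4))
    for i, b in enumerate(seq):
        one_hot[i, BASES.index(b)] = 1.0
    # einsum builds all pairwise outer products at once: (L,4) x (L,4) -> (L,L,4,4)
    return np.einsum("ik,jl->ijkl", one_hot, one_hot).reshape(len(seq), len(seq), 16)

def pairing_mask(seq):
    """Binary L x L map of positions that could form a canonical pair."""
    L = len(seq)
    return np.array([[(seq[i], seq[j]) in CANONICAL for j in range(L)]
                     for i in range(L)], dtype=float)

img = sequence_to_image("GGGAAACCC")
print(img.shape)  # (9, 9, 16)
print(int(pairing_mask("GGGAAACCC").sum()))  # 18 ordered G-C pairings
```

A 2D encoding like this is what lets a U-net-style convolutional backbone treat base-pair prediction as a dense image-to-image task.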
Citations: 0
Robust and smooth Couinaud segmentation via anatomical structure-guided point-voxel network
IF 7 · CAS Tier 2 (Medicine) · Q1 BIOLOGY · Pub Date: 2024-09-27 · DOI: 10.1016/j.compbiomed.2024.109202
Precise Couinaud segmentation from preoperative liver computed tomography (CT) is crucial for surgical planning and lesion examination. However, this task is challenging because the segments are defined by vessel structures, and there is no intensity contrast between adjacent Couinaud segments in CT images. To address this challenge, we design a multi-scale point-voxel fusion framework, which more effectively models the spatial relationships of points and the semantic information of the image, producing robust and smooth Couinaud segmentations. Specifically, we first segment the liver and vessels from the CT image and generate 3D liver point clouds and voxel grids embedded with the vessel structure. Then, our method extracts complementary feature representations from points and voxels through two input-specific branches. The local attention module adaptively fuses features from the two branches at different scales to balance their contributions in learning more discriminative features. Furthermore, we propose a novel feature-level distance loss that makes the features within a segment more compact, thereby improving the certainty of segmentation between segments. Our experimental results on three public liver datasets demonstrate that our proposed method outperforms several state-of-the-art methods by large margins. Specifically, in out-of-distribution (OOD) testing on the LiTS dataset, our method exceeded the voxel-based 3D UNet by approximately 20% in Dice score and outperformed the point-based PointNet2Plus by approximately 8%. Our code and manual annotations of the public datasets presented in this paper are available online: https://github.com/xukun-zhang/Couinaud-Segmentation.
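The preprocessing this abstract describes turns segmented liver anatomy into paired point-cloud and voxel-grid inputs. A minimal sketch of the voxelization half, with a random point cloud standing in for liver points and every name here hypothetical:

```python
import numpy as np

def voxelize(points, grid_size=32):
    """Map an (N, 3) point cloud into a binary occupancy grid, returning both
    the grid and each point's voxel index (the link used for point-voxel fusion)."""
    mins = points.min(axis=0)
    spans = points.max(axis=0) - mins
    spans[spans == 0] = 1.0                      # guard flat axes against divide-by-zero
    idx = ((points - mins) / spans * (grid_size - 1)).astype(int)
    grid = np.zeros((grid_size,) * 3, dtype=np.float32)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return grid, idx

rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(1000, 3))        # stand-in for liver surface points
grid, idx = voxelize(pts, grid_size=16)
print(grid.shape, int(grid.sum()))
```

Keeping the per-point voxel index is what allows features computed on the grid to be gathered back onto the points, so the two branches can exchange information at each scale.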
Citations: 0
Research on carotid artery plaque anomaly detection algorithm based on ultrasound images
IF 7 · CAS Tier 2 (Medicine) · Q1 BIOLOGY · Pub Date: 2024-09-27 · DOI: 10.1016/j.compbiomed.2024.109180
Carotid artery plaque is a key factor in stroke and other cardiovascular diseases. Accurate detection and localization of carotid artery plaque are essential for early prevention and treatment of disease. However, current carotid artery ultrasound image anomaly detection algorithms face several challenges, such as the scarcity of carotid artery anomaly data and the tendency of traditional convolutional neural networks (CNNs) to overlook long-distance dependencies in image processing. To address these issues, we propose an anomaly detection algorithm for carotid artery plaques based on ultrasound images. The algorithm innovatively introduces an anomaly sample pair generation method to increase dataset diversity. Moreover, it employs an improved adaptive recursive gating pyramid pooling module to extract image features. This module significantly enhances the model's capacity for high-order spatial interactions and adaptive feature fusion, thereby greatly improving the neural network's feature extraction ability. The algorithm uses a sigmoid layer to map each pixel's feature vector to a probability between 0 and 1, and anomalies are detected through probability-threshold binarization. Experimental results show that our algorithm's AUROC reached 90.7% on a carotid artery dataset, a 2.1% improvement over the FPI method. This research is expected to provide robust support for the early prevention and treatment of cardiovascular diseases.
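The final decision step described above, mapping each pixel's score through a sigmoid and binarizing at a probability threshold, is simple to sketch; the logit values, map size, and threshold here are made-up placeholders:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def anomaly_map(logits, threshold=0.5):
    """Map per-pixel logits to probabilities and binarize at a threshold."""
    probs = sigmoid(logits)
    return (probs >= threshold).astype(np.uint8), probs

# Hypothetical 8x8 logit map with a high-scoring 2x2 'plaque' region
logits = np.full((8, 8), -3.0)
logits[2:4, 5:7] = 4.0
mask, probs = anomaly_map(logits)
print(int(mask.sum()))  # 4 pixels flagged as anomalous
```

Sweeping the threshold over `probs` against ground-truth masks is also how an AUROC figure like the 90.7% reported above is computed.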
Citations: 0
Drug-induced torsadogenicity prediction model: An explainable machine learning-driven quantitative structure-toxicity relationship approach
IF 7 · CAS Tier 2 (Medicine) · Q1 BIOLOGY · Pub Date: 2024-09-26 · DOI: 10.1016/j.compbiomed.2024.109209
Drug-induced Torsade de Pointes (TdP), a life-threatening polymorphic ventricular tachyarrhythmia, arises from the cardiotoxic effects of pharmaceuticals. The lack of precise mechanisms and clinical biomarkers for detecting this adverse effect presents substantial challenges in drug safety assessment. In this study, we propose that analyzing the physicochemical properties of pharmaceuticals can provide valuable insights into their potential for torsadogenic cardiotoxicity. Our research centers on estimating TdP risk from the molecular structure of drugs. We introduce a novel quantitative structure-toxicity relationship (QSTR) prediction model that leverages an in silico approach developed in accordance with the 4R rule for laboratory animals. This approach eliminates the need for animal testing, saves time, and reduces cost. Our algorithm has successfully predicted the torsadogenic risks of various pharmaceutical compounds. To develop this model, we employed Support Vector Machine (SVM) and ensemble techniques, including Random Forest (RF), Extreme Gradient Boosting (XGBoost), and Categorical Boosting (CatBoost). We enhanced the model's predictive accuracy through a rigorous two-step feature selection process. Furthermore, we utilized the SHapley Additive exPlanations (SHAP) technique to explain the prediction of torsadogenic risk, particularly within the RF model. This study represents a significant step towards creating a robust QSTR model, which can serve as an early screening tool for assessing the torsadogenic potential of pharmaceutical candidates or existing drugs. By incorporating molecular structure-based insights, we aim to enhance drug safety evaluation and minimize the risks of drug-induced TdP, ultimately benefiting both patients and the pharmaceutical industry.
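The QSTR workflow above (tree ensembles on physicochemical descriptors, followed by attribution analysis) can be sketched with scikit-learn. The synthetic descriptors, the two "informative" features, and the use of impurity importances in place of SHAP attributions are all assumptions made for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for physicochemical descriptors (e.g. logP, MW, pKa, ...)
rng = np.random.default_rng(42)
X = rng.normal(size=(300, 6))
# Hypothetical rule: 'torsadogenic' when a weighted sum of two descriptors is high
y = (0.9 * X[:, 0] + 0.7 * X[:, 3] + rng.normal(0, 0.3, 300) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print(round(clf.score(X_te, y_te), 2))
# Impurity-based importances as a cheap proxy for per-feature attribution
print(np.argsort(clf.feature_importances_)[::-1][:2])
```

On real data the descriptor matrix would come from molecular structure, and SHAP would replace the impurity importances to give signed, per-compound explanations.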
Citations: 0
Continual learning in medical image analysis: A survey
IF 7 · CAS Tier 2 (Medicine) · Q1 BIOLOGY · Pub Date: 2024-09-26 · DOI: 10.1016/j.compbiomed.2024.109206
In the dynamic realm of practical clinical scenarios, Continual Learning (CL) has gained increasing interest in medical image analysis due to its potential to address major challenges associated with data privacy, model adaptability, memory inefficiency, prediction robustness and detection accuracy. In general, the primary challenge in adapting and advancing CL remains catastrophic forgetting. Beyond this challenge, recent years have witnessed a growing body of work that expands our comprehension and application of continual learning in the medical domain, highlighting its practical significance and intricacy. In this paper, we present an in-depth and up-to-date review of the application of CL in medical image analysis. Our discussion delves into the strategies employed to address specific tasks within the medical domain, categorizing existing CL methods into three settings: Task-Incremental Learning, Class-Incremental Learning, and Domain-Incremental Learning. These settings are further subdivided based on representative learning strategies, allowing us to assess their strengths and weaknesses in the context of various medical scenarios. By establishing a correlation between each medical challenge and the corresponding insights provided by CL, we provide a comprehensive understanding of the potential impact of these techniques. To enhance the utility of our review, we provide an overview of the commonly used benchmark medical datasets and evaluation metrics in the field. Through a comprehensive comparison, we discuss promising future directions for the application of CL in medical image analysis. A comprehensive list of studies is being continuously updated at https://github.com/xw1519/Continual-Learning-Medical-Adaptation.
Citations: 0
Accurate detection and instance segmentation of unstained living adherent cells in differential interference contrast images
IF 7 · CAS Tier 2 (Medicine) · Q1 BIOLOGY · Pub Date: 2024-09-26 · DOI: 10.1016/j.compbiomed.2024.109151
Detecting and segmenting unstained living adherent cells in differential interference contrast (DIC) images is crucial in biomedical research, such as cell microinjection, cell tracking, cell activity characterization, and revealing cell phenotypic transition dynamics. We present a robust approach, starting with dataset transformation. We curated 520 pairs of DIC images, containing 12,198 HepG2 cells, with ground truth annotations. The original dataset was randomly split into training, validation, and test sets. Rotations were applied to images in the training set, creating an interim "α set." Similar transformations formed "β" and "γ sets" for the validation and test data. The α set trained a Mask R-CNN, while the β set produced predictions, subsequently filtered and categorized. A residual network (ResNet) classifier determined mask retention. The γ set underwent iterative processing, yielding the final segmentation. Our method achieved a weighted average of 0.567 in bounding-box average precision at IoU 0.75 (AP@0.75, bbox) and 0.673 in instance-mask average precision (AP@0.75, segm), both outperforming major algorithms for cell detection and segmentation. Visualization also revealed that our method excels in practicality, accurately capturing nearly every cell, a marked improvement over alternatives.
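The α-set construction described above (rotations applied to each training image together with its mask) can be sketched as follows; the array shapes and function name are hypothetical:

```python
import numpy as np

def rotation_augment(images, masks):
    """Build the interim 'alpha set' by adding 90/180/270-degree rotations
    of each training image and its paired instance mask."""
    aug_imgs, aug_masks = [], []
    for img, msk in zip(images, masks):
        for k in range(4):                     # k=0 keeps the original orientation
            aug_imgs.append(np.rot90(img, k))
            aug_masks.append(np.rot90(msk, k))
    return aug_imgs, aug_masks

imgs = [np.zeros((64, 64), dtype=np.float32) for _ in range(5)]
msks = [np.zeros((64, 64), dtype=np.uint8) for _ in range(5)]
a_imgs, a_msks = rotation_augment(imgs, msks)
print(len(a_imgs))  # 20: each of the 5 images yields 4 orientations
```

Rotating image and mask with the same `k` keeps the annotations aligned, which is the property that makes this augmentation safe for instance segmentation.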
检测和分割微分干涉对比(DIC)图像中未染色的活体粘附细胞在生物医学研究中至关重要,例如细胞显微注射、细胞追踪、细胞活动表征以及揭示细胞表型转变动态。我们从数据集转换入手,提出了一种稳健的方法。我们整理了 520 对 DIC 图像,其中包含 12,198 个 HepG2 细胞,并附有地面实况注释。原始数据集被随机分成训练集、验证集和测试集。对训练集中的图像进行旋转,形成临时的 "α集"。类似的变换形成了验证和测试数据的 "β 集 "和 "γ 集"。α 集 "训练了一个 "掩码 R-CNN",而 "β 集 "产生了预测结果,随后进行了过滤和分类。残差网络(ResNet)分类器确定掩码的保留。γ 集经过迭代处理,得出最终的分割结果。我们的方法在平均精度(AP)0.75bbox 和 AP0.75segm 上分别达到了 0.567 和 0.673 的加权平均值,均优于主要的细胞检测和分割算法。可视化结果还显示,我们的方法在实用性方面表现出色,几乎能准确捕捉到每一个细胞,比其他方法有明显进步。
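The rotation step that expands the training images into the interim “α set” can be sketched as below; the 90° multiples, image shapes, and helper name are illustrative assumptions, since the abstract does not specify the exact angles used.

```python
import numpy as np

def build_alpha_set(images):
    """Return each image together with its 90°, 180°, and 270° rotations.

    A sketch of the rotation augmentation; the paper may use different
    angles or additional transforms.
    """
    augmented = []
    for img in images:
        for k in range(4):          # k = 0 keeps the original orientation
            augmented.append(np.rot90(img, k=k))
    return augmented

# Three placeholder DIC-sized grayscale frames stand in for the training set.
train = [np.zeros((512, 512), dtype=np.uint8) for _ in range(3)]
alpha_set = build_alpha_set(train)
print(len(alpha_set))               # 3 images × 4 orientations = 12
```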
A multimodal cross-transformer-based model to predict mild cognitive impairment using speech, language and vision
IF 7 CAS Tier 2 (Medicine) Q1 BIOLOGY Pub Date: 2024-09-26 DOI: 10.1016/j.compbiomed.2024.109199
Mild Cognitive Impairment (MCI) is an early stage of memory loss or other cognitive ability loss in individuals who maintain the ability to independently perform most activities of daily living. It is considered a transitional stage between normal cognition and more severe cognitive declines like dementia or Alzheimer’s. Based on reports from the National Institute on Aging (NIA), people with MCI are at a greater risk of developing dementia, so it is of great importance to detect MCI as early as possible to mitigate its progression to Alzheimer’s and dementia. Recent studies have harnessed Artificial Intelligence (AI) to develop automated methods to predict and detect MCI. The majority of the existing research is based on unimodal data (e.g., only speech or prosody), but recent studies have shown that multimodality leads to a more accurate prediction of MCI. However, effectively exploiting different modalities is still a big challenge due to the lack of efficient fusion methods. This study proposes a robust fusion architecture utilizing an embedding-level fusion via a co-attention mechanism to leverage multimodal data for MCI prediction. This approach addresses the limitations of early and late fusion methods, which often fail to preserve inter-modal relationships. Our embedding-level fusion aims to capture complementary information across modalities, enhancing predictive accuracy. We used the I-CONECT dataset, in which a large number of semi-structured internet/webcam conversations between participants aged 75+ years old and interviewers were recorded. We introduce a multimodal speech-language-vision Deep Learning-based method to differentiate MCI from Normal Cognition (NC). Our proposed architecture includes co-attention blocks to fuse three different modalities at the embedding level to find the potential interactions between speech (audio), language (transcribed speech), and vision (facial videos) within the cross-Transformer layer.
Experimental results demonstrate that our fusion method achieves an average AUC of 85.3% in detecting MCI from NC, significantly outperforming unimodal (60.9%) and bimodal (76.3%) baseline models. This superior performance highlights the effectiveness of our model in capturing and utilizing the complementary information from multiple modalities, offering a more accurate and reliable approach for MCI prediction.
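A minimal, numpy-only sketch of the cross-attention operation underlying co-attention blocks, where one modality's embeddings query another's. Token counts, embedding dimensions, and the omission of learned projections and multiple heads are simplifying assumptions, not the paper's architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values):
    """Single-head scaled dot-product attention across two modalities.

    One modality (queries) attends over another (keys_values); learned
    query/key/value projections are omitted for brevity.
    """
    d = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d)   # (Tq, Tk)
    attn = softmax(scores, axis=-1)                 # rows sum to 1
    return attn @ keys_values                       # (Tq, d)

rng = np.random.default_rng(1)
speech = rng.normal(size=(8, 16))     # 8 hypothetical speech tokens, dim 16
language = rng.normal(size=(12, 16))  # 12 hypothetical text tokens, dim 16
fused = cross_attention(speech, language)
print(fused.shape)                    # (8, 16)
```

Each fused speech token becomes a convex combination of language tokens; a symmetric call with the arguments swapped would let language attend over speech.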
SparseMorph: A weakly-supervised lightweight sparse transformer for mono- and multi-modal deformable image registration
IF 7 CAS Tier 2 (Medicine) Q1 BIOLOGY Pub Date: 2024-09-26 DOI: 10.1016/j.compbiomed.2024.109205

Purpose

Deformable image registration (DIR) is crucial for improving the precision of clinical diagnosis. Recent Transformer-based DIR methods have shown promising performance by capturing long-range dependencies. Nevertheless, these methods still grapple with high computational complexity. This work aims to enhance the performance of DIR in both computational efficiency and registration accuracy.

Methods

We proposed a weakly-supervised lightweight Transformer model, named SparseMorph. To reduce computational complexity without compromising the representative feature capture ability, we designed a sparse multi-head self-attention (SMHA) mechanism. To accumulate representative features while preserving high computational efficiency, we constructed a multi-branch multi-layer perceptron (MMLP) module. Additionally, we developed an anatomically-constrained weakly-supervised strategy to guide the alignment of regions-of-interest in mono- and multi-modal images.

Results

We assessed SparseMorph in terms of registration accuracy and computational complexity.
Within the mono-modal brain datasets IXI and OASIS, our SparseMorph outperforms the state-of-the-art method TransMatch with improvements of 3.2 % and 2.9 % in DSC scores for MRI-to-CT registration tasks, respectively. Moreover, in the multi-modal cardiac dataset MMWHS, our SparseMorph shows DSC score improvements of 9.7 % and 11.4 % compared to TransMatch in MRI-to-CT and CT-to-MRI registration tasks, respectively. Notably, SparseMorph attains these performance advantages while utilizing 33.33 % of the parameters of TransMatch.

Conclusions

The proposed weakly-supervised deformable image registration model, SparseMorph, demonstrates efficiency in both mono- and multi-modal registration tasks, exhibiting superior performance compared to state-of-the-art algorithms, and establishing an effective DIR method for clinical applications.
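The abstract does not detail the SMHA design, so the sketch below shows one plausible way sparsity can cut attention cost: keeping only the top-k scores per query row before the softmax. The function name, the value of k, and the array shapes are assumptions for illustration.

```python
import numpy as np

def sparse_attention_weights(scores, k):
    """Keep only the top-k scores per query row; mask the rest to -inf.

    A generic top-k sparsification sketch; the actual SMHA mechanism in
    SparseMorph may differ.
    """
    masked = np.full_like(scores, -np.inf)
    idx = np.argpartition(scores, -k, axis=-1)[:, -k:]   # top-k indices per row
    rows = np.arange(scores.shape[0])[:, None]
    masked[rows, idx] = scores[rows, idx]
    # Softmax over the surviving entries; masked positions get weight 0.
    e = np.exp(masked - masked.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(2)
s = rng.normal(size=(4, 10))          # 4 queries, 10 keys (illustrative)
attn = sparse_attention_weights(s, k=3)
print((attn > 0).sum(axis=-1))        # exactly 3 nonzero weights per row
```

With a fixed k, only k of the attention weights per query are nonzero, so the downstream weighted sum touches k values instead of all keys.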
Effects of spatially dense adrenergic stimulation to rotor behaviour in simulated atrial sheets
IF 7 CAS Tier 2 (Medicine) Q1 BIOLOGY Pub Date: 2024-09-26 DOI: 10.1016/j.compbiomed.2024.109195
Sympathetic hyperactivity via spatially dense adrenergic stimulation may create pro-arrhythmic substrates even without structural remodelling. However, the effect of sympathetic hyperactivity on arrhythmic activity, such as rotors, is unknown. Using simulations, we examined the effects of gradually increasing the spatial density of adrenergic stimulation (AS) in atrial sheets on rotors. We compared their characteristics against rotors hosted in atrial sheets with increasing spatial density of minimally conductive (MC) elements to simulate structural remodelling due to injury or disease. We generated rotors using an S1-S2 stimulation protocol. Then, we created phase maps to identify phase singularities and map their trajectory over time. We measured each rotor’s duration (s), angular speed (rad/s), and spatiotemporal organization. We demonstrated that atrial sheets with increased AS spatial densities could maintain rotors longer than with MC elements (2.6 ± 0.1 s vs. 1.5 ± 0.2 s, p<0.001). Moreover, rotors have higher angular speed (70 ± 7 rads/s vs. 60 ± 15 rads/s, p<0.05) and better spatiotemporal organization (0.56 ± 0.05 vs. 0.58 ± 0.18, p<0.05) in atrial sheets with less than 25% AS elements compared to MC elements. Our findings may help elucidate electrophysiological potential alterations in atrial substrates due to sympathetic hyperactivity, particularly among individuals with autonomic derangements caused by chronic distress.
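Phase singularities like those identified from the phase maps in this study are commonly located by summing wrapped phase differences around each 2×2 plaquette of the map; a winding of ±2π marks a candidate rotor core. Below is a minimal sketch of this standard topological-charge method, not necessarily the authors' exact implementation; the synthetic spiral field is invented for the demonstration.

```python
import numpy as np

def wrap(a):
    """Wrap angle differences into [-pi, pi)."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def phase_singularities(phase):
    """Locate phase singularities on a 2-D phase map.

    Sums wrapped phase differences around each 2x2 plaquette; a total of
    roughly ±2π indicates an enclosed singularity (rotor core candidate).
    """
    d1 = wrap(phase[:-1, 1:] - phase[:-1, :-1])   # top edge, left → right
    d2 = wrap(phase[1:, 1:] - phase[:-1, 1:])     # right edge, top → bottom
    d3 = wrap(phase[1:, :-1] - phase[1:, 1:])     # bottom edge, right → left
    d4 = wrap(phase[:-1, :-1] - phase[1:, :-1])   # left edge, bottom → top
    winding = d1 + d2 + d3 + d4
    return np.argwhere(np.abs(winding) > np.pi)   # ≈ ±2π plaquettes

# Synthetic spiral: the angle field around an off-grid centre contains
# exactly one phase singularity.
y, x = np.mgrid[0:21, 0:21]
phase = np.arctan2(y - 10.2, x - 10.2)            # offset avoids a grid point
cores = phase_singularities(phase)
print(len(cores))                                  # a single rotor core
```

Tracking the returned coordinates across successive frames yields the singularity trajectories from which duration, angular speed, and spatiotemporal organization are measured.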