In the realm of cardiovascular medicine, medical imaging plays a crucial role in accurately classifying cardiac diseases and making precise diagnoses. However, integrating data science techniques into this field is challenging: large volumes of images are required, while ethical constraints, high costs, and variability in imaging protocols limit data acquisition. It is therefore necessary to investigate different avenues to overcome this challenge. In this contribution, we offer an innovative tool to overcome this limitation. In particular, we delve into the application of a well-recognized method known as the eigenfaces approach to classify cardiac diseases. This approach was originally developed to represent pictures of faces efficiently using principal component analysis, which provides a set of eigenvectors (the eigenfaces) explaining the variation between face images. Given its effectiveness in face recognition, we sought to evaluate its applicability to more complex medical imaging datasets. Specifically, we integrate this approach with convolutional neural networks to classify echocardiography images taken from mice in five distinct cardiac conditions (healthy, diabetic cardiomyopathy, myocardial infarction, obesity and TAC hypertension). The results show a substantial enhancement when the singular value decomposition is employed for pre-processing, with classification accuracy increasing by approximately 50%.
Article: "Eigenhearts: Cardiac diseases classification using eigenfaces approach", by Nourelhouda Groun, María Villalba-Orero, Lucía Casado-Martín, Enrique Lara-Pezzi, Eusebio Valero, Soledad Le Clainche and Jesús Garicano-Mena. Computers in Biology and Medicine, vol. 192, Article 110167. Pub Date: 2025-04-26. DOI: 10.1016/j.compbiomed.2025.110167.
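A minimal sketch of the SVD-based pre-processing described in the abstract: images are flattened, centered, and projected onto their leading singular modes (the analogue of eigenfaces) before being handed to a classifier. The array shapes and number of retained modes below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def svd_preprocess(images, n_modes=10):
    """Project a stack of images onto their leading SVD modes.

    images: array of shape (n_images, height, width).
    Returns the rank-reduced reconstructions, same shape as the input.
    """
    n, h, w = images.shape
    X = images.reshape(n, h * w).astype(float)
    mean = X.mean(axis=0)          # the "mean face" of the eigenfaces method
    Xc = X - mean
    # Economy SVD: rows of Vt are the eigen-images (principal directions).
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    k = min(n_modes, len(S))
    X_lowrank = (U[:, :k] * S[:k]) @ Vt[:k] + mean
    return X_lowrank.reshape(n, h, w)

# Toy usage: 20 random 32x32 "images" reduced to 5 modes.
imgs = np.random.rand(20, 32, 32)
filtered = svd_preprocess(imgs, n_modes=5)
```

Retaining only the leading modes filters out low-variance detail, which is the noise-suppression effect the abstract attributes to the SVD step.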
Pub Date: 2025-04-26. DOI: 10.1016/j.compbiomed.2025.110193
Beatriz Merino-Barbancho, Ana Cipric, Peña Arroyo, Miguel Rujas, Rodrigo Martín Gómez del Moral Herranz, Torben Barev, Nicholas Ciccone, Giuseppe Fico
<div><h3>Background</h3><div>Treatment non-adherence stands as a major barrier to the effective management of chronic conditions. Non-adherent behavior is estimated to affect up to 50 % of patients with chronic conditions, leading to poorer health outcomes, higher rates of hospitalization, and increased mortality.</div></div><div><h3>Objective</h3><div>This study provides a structured overview of the computational methods and techniques used to build predictive models of patients' treatment adherence.</div></div><div><h3>Methods</h3><div>A scoping review was conducted, and the following databases were searched to identify relevant publications: PubMed, IEEE and Web of Science. The screening of publications consisted of two steps. First, the hits obtained from the search were independently screened and selected using an open-source machine learning (ML)-aided pipeline applying active learning: ASReview (Active learning for Systematic Reviews). Publications selected for full-text review and data extraction were those highly prioritized by ASReview.</div></div><div><h3>Results</h3><div>A total of 45 papers were selected for the second round of full-text screening, and 29 papers were considered in the final review. The findings suggest supervised learning (regression and classification) to be the most used analytical approach, with generalized linear models (21.67 %), logistic regressions (20 %) and random forests (18.33 %) the most frequently employed techniques. The family of generalized linear models identified in the studies included multiple, hierarchical and mixed-effect models, among others. The selection of these models often depended on data sources and types (e.g., logistic regressions for dichotomous outcome measures). Furthermore, over 54 % of adherence topics were related to chronic metabolic conditions such as diabetes, hypertension, and hyperlipidemia. The most frequently assessed predictors were treatment-related and socio-demographic and economic factors, followed by condition-related factors. The treatment-adherence variable was mostly dichotomous (12 out of 29 studies) and computed using metrics such as the Medication Possession Ratio with an 80 % threshold. A limitation of the reviewed studies is the failure to account for interrelationships between different determinants of adherence behavior (e.g., patients' socio-economic status and the ability to afford medication), denoting the need for future research using more complex analytical techniques that better capture these connections.</div></div><div><h3>Conclusion</h3><div>Systems that accurately predict treatment adherence can pave the way for improved therapeutic outcomes, reduced healthcare costs and personalized treatment plans. This paper supports understanding of the efforts made in the field of modeling adherence-related factors. In particular, the results provide a structured overview of the computational methods and techniques.</div></div>
Article: "Methods and computational techniques for predicting adherence to treatment: A scoping review". Computers in Biology and Medicine, vol. 192, Article 110193.
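The 80 % Medication Possession Ratio threshold mentioned in the review's results can be illustrated with a short computation. The fill records and observation window below are invented for illustration only.

```python
from datetime import date

def medication_possession_ratio(fills, period_start, period_end):
    """MPR = total days' supply dispensed / days in the observation period.

    fills: list of (fill_date, days_supply) tuples.
    """
    period_days = (period_end - period_start).days
    supplied = sum(days_supply for fill_date, days_supply in fills
                   if period_start <= fill_date <= period_end)
    return supplied / period_days

# Hypothetical patient: three 30-day fills over a 120-day window.
fills = [(date(2024, 1, 1), 30), (date(2024, 2, 5), 30),
         (date(2024, 3, 20), 30)]
mpr = medication_possession_ratio(fills, date(2024, 1, 1), date(2024, 4, 30))
adherent = mpr >= 0.80   # dichotomize at the 80 % threshold
```

Here 90 supplied days over a 120-day window gives an MPR of 0.75, so the patient falls below the 80 % cut-off and would be coded non-adherent.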
Small nucleolar RNAs (snoRNAs) are increasingly recognized for their critical role in the pathogenesis and characterization of various human diseases. Consequently, the precise identification of snoRNA-disease associations (SDAs) is essential for understanding disease progression and advancing treatment strategies. However, conventional biological experimental approaches are costly, time-consuming, and resource-intensive; machine learning-based computational methods therefore offer a promising way to mitigate these limitations. This paper proposes ‘GBDTSVM’, a novel and efficient machine learning approach for predicting snoRNA-disease associations that leverages a Gradient Boosting Decision Tree (GBDT) and a Support Vector Machine (SVM). GBDTSVM extracts integrated snoRNA-disease feature representations using the GBDT, and the SVM is subsequently used to classify and identify potential associations. Furthermore, the method enhances prediction accuracy by incorporating Gaussian integrated profile kernel similarity for both snoRNAs and diseases. Experimental evaluation of the GBDTSVM model demonstrates superior performance compared to state-of-the-art methods, achieving an AUROC of 0.96 and an AUPRC of 0.95 on the ‘MDRF’ dataset. Our model also shows superior performance on two further datasets, ‘LSGT’ and ‘PsnoD’. Additionally, a case study on the predicted snoRNA-disease associations verified the top-ranked snoRNAs across twelve prevalent diseases, further validating the efficacy of the GBDTSVM approach. These results underscore the model’s potential as a robust tool for advancing snoRNA-related disease research. Source code and datasets for our proposed framework can be obtained from: https://github.com/mariamuna04/gbdtsvm.
Article: "GBDTSVM: Combined Support Vector Machine and Gradient Boosting Decision Tree Framework for efficient snoRNA-disease association prediction", by Ummay Maria Muna, Fahim Hafiz, Shanta Biswas and Riasat Azim. Computers in Biology and Medicine, vol. 192, Article 110219. Pub Date: 2025-04-26. DOI: 10.1016/j.compbiomed.2025.110219.
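The abstract describes a two-stage pipeline: GBDT-derived feature representations classified by an SVM. One common way to realize such a pipeline (a sketch under assumptions, not necessarily the paper's exact construction) is to one-hot encode the leaf index each boosted tree assigns to a sample and feed that encoding to the SVM; synthetic data stands in for the snoRNA-disease features here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder
from sklearn.svm import SVC

# Synthetic stand-in for integrated snoRNA-disease feature vectors.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 1: fit a GBDT; the leaf each sample reaches in each tree acts as a
# learned, non-linear feature representation.
gbdt = GradientBoostingClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
enc = OneHotEncoder(handle_unknown="ignore")
F_tr = enc.fit_transform(gbdt.apply(X_tr).reshape(len(X_tr), -1))
F_te = enc.transform(gbdt.apply(X_te).reshape(len(X_te), -1))

# Step 2: classify the GBDT-derived features with an SVM.
svm = SVC().fit(F_tr, y_tr)
accuracy = svm.score(F_te, y_te)
```

The leaf-encoding trick lets the SVM operate on tree-partitioned features rather than raw inputs, which is one plausible reading of "GBDT extracts feature representations, SVM classifies".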
Background
Effective transcranial temporal interference stimulation (tTIS) requires an optimized electrode configuration to target deep brain structures accurately. While individualized electric field analysis using high-resolution structural MRI enables precise electrode placement, its clinical practicality is limited by the significant costs associated with imaging, specialized software, and navigation systems. Alternatively, standardized electrode montages optimized through population-based electric field analysis might overcome these limitations, although it remains unclear how accurately this approach approximates individualized optimization.
Aim
This study evaluates the feasibility of using group-level electric field analysis to optimize the tTIS montage. Specifically, it seeks to maximize the intracranial electric field using a population-proxy approach and compare its efficacy to individualized electric field optimization.
Method
We optimize the montage across various populations, balancing the trade-off between focality and electric field strength at deep brain targets. The method is compared to conventional individualized electric field-based optimization. Factors such as population size and age were analyzed for their impact on montage selection and effectiveness.
Results
Population-based electric field optimization demonstrated focality and targeting accuracy comparable to individualized analysis, with a difference of up to 17 %. An age mismatch between the population proxy and the target individual reduced focality by up to 8.3 % compared to an age-matched population proxy. In addition, insufficient population size led to inconsistencies in montage optimization, although these were negligible for populations larger than 40 individuals.
Conclusion
This study demonstrates the capability of population-based electric field analysis to achieve targeting effects comparable, in terms of focality and intensity, to individualized electric field analysis. By eliminating the need for patient-specific MRI scans, this approach significantly enhances the accessibility and practicality of tTIS in diverse research and clinical applications.
Article: "Population-optimized electrode montage approximates individualized optimization in transcranial temporal interference stimulation", by Kanata Yatsuda, Mariano Fernández-Corazza, Wenwei Yu and Jose Gomez-Tames. Computers in Biology and Medicine, vol. 192, Article 110223. Pub Date: 2025-04-25. DOI: 10.1016/j.compbiomed.2025.110223.
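The Method section describes balancing the trade-off between focality and electric field strength at the target during montage optimization. One simple way to sketch such a trade-off is a scalarized selection over candidate montages; the montage names, field values and weighting below are purely illustrative, not the paper's data.

```python
def select_montage(candidates, alpha=0.5):
    """Pick the montage maximizing a weighted trade-off between target
    field strength (higher is better) and spread (lower is better, i.e.
    a proxy for focality).

    candidates: dict mapping montage name -> (strength in V/m, spread in cm).
    alpha: weight on strength vs. focality (illustrative choice).
    """
    def score(strength, spread):
        return alpha * strength - (1 - alpha) * spread
    return max(candidates, key=lambda m: score(*candidates[m]))

# Hypothetical candidate montages with simulated field metrics.
montages = {
    "F3-F4 / P3-P4": (0.21, 4.0),
    "C3-C4 / O1-O2": (0.18, 2.5),
    "F7-F8 / T7-T8": (0.25, 6.0),
}
best = select_montage(montages, alpha=0.5)
```

In a population-based setting the strength and spread values would be averages over the simulated cohort rather than one individual's fields, which is the substitution the study evaluates.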
Intracranial hemorrhage (ICH) refers to cerebral bleeding resulting from ruptured blood vessels within the brain. Delayed or inaccurate diagnosis and treatment of ICH can lead to death or disability, so early and precise diagnosis is crucial for protecting patients' lives. Automatic segmentation of hematomas in CT images can provide doctors with essential diagnostic support and improve diagnostic efficiency. CT images of intracranial hemorrhage exhibit multi-scale, multi-target characteristics and blurred edges. This paper proposes a Multi-scale and Edge Feature Fusion Network (MEF-Net) that extracts multi-scale and edge features and fully fuses them through a fusion mechanism. The network first extracts the multi-scale features and edge features of the image through the encoder and the edge detection module, respectively, then fuses the deep information and employs a multi-kernel attention module to process the shallow features, enhancing multi-target recognition capability. Finally, the feature maps from each module are combined to produce the segmentation result. Experimental results show that this method achieved average Dice scores of 0.7508 and 0.7443 on two public datasets, respectively, surpassing several state-of-the-art medical image segmentation methods. The proposed MEF-Net significantly improves the accuracy of intracranial hemorrhage segmentation.
Article: "MEF-Net: Multi-scale and edge feature fusion network for intracranial hemorrhage segmentation in CT images", by Xiufeng Zhang, Shichen Zhang, Yunfei Jiang and Lingzhuo Tian. Computers in Biology and Medicine, vol. 192, Article 110245. Pub Date: 2025-04-25. DOI: 10.1016/j.compbiomed.2025.110245.
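The Dice score used to evaluate segmentation quality here is, for binary masks, twice the intersection divided by the sum of the two mask sizes. A minimal sketch with toy 8x8 masks (the mask contents are invented for illustration):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice = 2|P ∩ T| / (|P| + |T|) for binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Two overlapping 4x4 squares: 16 px each, 9 px overlap.
pred = np.zeros((8, 8), dtype=bool)
pred[2:6, 2:6] = True
truth = np.zeros((8, 8), dtype=bool)
truth[3:7, 3:7] = True
score = dice_score(pred, truth)   # 2*9 / (16+16) = 0.5625
```

The small `eps` keeps the ratio defined when both masks are empty, a common convention when averaging Dice over many slices.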
Pub Date: 2025-04-25. DOI: 10.1016/j.compbiomed.2025.110201
S. Jayaprakash, J.P. Keerthana
The healthcare sector is undergoing a profound transformation driven by the rapid rise of mobile health applications (mHealth apps), which are becoming integral to how patients manage their health. This paper examines the role of next-generation technologies such as Blockchain, the Internet of Things (IoT), Artificial Intelligence (AI) and Machine Learning (ML) in enhancing healthcare applications, specifically in telemedicine, health tracking and medical delivery. The research is motivated by the dramatic surge in mHealth app usage, particularly following the COVID-19 pandemic, and the growing demand for digital solutions to improve patient care. Through a comprehensive analysis of existing healthcare apps, this study evaluates their functionalities, user engagement and adoption rates. It finds that integrating advanced technologies significantly improves the user experience, enhances operational efficiency and increases adoption rates among patients and healthcare providers. These technologies facilitate appointment scheduling, health monitoring and access to medical records, ultimately enabling users to manage wellness goals and illnesses more effectively. Furthermore, they streamline healthcare operations, making them more efficient and cost-effective. The paper highlights the transformative potential of integrating these technologies into healthcare apps, which can greatly improve patient care outcomes and pave the way for future innovations in digital health solutions. Through qualitative and quantitative assessments, this study provides valuable insights for developers and healthcare professionals looking to optimize the effectiveness and adoption of digital health applications.
Article: "Real-time health monitoring by examining the role of next-generation elements in a medical app". Computers in Biology and Medicine, vol. 192, Article 110201.
Pub Date: 2025-04-25. DOI: 10.1016/j.compbiomed.2025.110200
Samir Abdel-Rahman, Pavel Antiperovitch, Anthony Tang, Mohammad I. Daoud, Vijay Parsa, James C. Lacefield
In electrocardiography (ECG), measurement of QRS duration (QRSd) is crucial for diagnosing conditions such as left bundle branch block. To address the limited availability of ECG databases with QRS delineation labels, we present a method for training deep learning object detection models for global QRSd estimation on small databases, requiring only minimal manual labeling of median beats. In our method, an ECG record is segmented into individual heartbeats, the beats are transformed into artificial images, and a Faster R-CNN model estimates the global QRSd. Faster R-CNN models were tested with three different backbone configurations (VGG-16, VGG-19, and ResNet-18) and two ECG image formats: binary images, in which each beat in each lead was represented by a separate image, and RGB images, in which the same beat from a trio of leads was superimposed by mapping each lead to a different color channel. Using 258 twelve-lead, 10-s digital ECG records acquired from 140 unique heart failure outpatients, the best-performing configuration, VGG-19 with RGB images, achieved root-mean-square and mean absolute errors for QRSd of 10.4 ± 0.8 ms and 8.2 ± 1.0 ms, respectively, during five-fold cross-validation. Testing on an independent, publicly available dataset yielded root-mean-square and mean absolute errors for QRSd of 7.0 ± 1.1 ms and 5.3 ± 0.9 ms, respectively. Our method therefore provides high QRSd estimation accuracy while reducing the need for manual labeling, and its generalization to an independent database demonstrates potential for efficiently training deep learning models on small ECG databases.
Article: "Faster R-CNN approach for estimating global QRS duration in electrocardiograms with a limited quantity of annotated data". Computers in Biology and Medicine, vol. 192, Article 110200.
Analyzing the connectome of an organism allows us to understand how different areas of its brain communicate with each other and how the structure of the brain is related to its function. Thanks to new technological advances, the connectome of increasingly complex organisms has been reconstructed in recent years. Drosophila melanogaster is currently the most complex organism whose complete connectome is known, both structurally and functionally. In this paper, we aim to contribute to the study of the Drosophila structural connectome by proposing an ad hoc approach for the discovery of network motifs that may be present in it. Unlike previous approaches, which focused on parts of the connectome of complex organisms or the entire connectome of very simple organisms, our approach operates at the whole-brain scale for the most complex organism whose complete connectome is currently known. Furthermore, while previous works have focused on extending existing motif extraction approaches to the connectome case, our approach proposes a motif concept specifically designed for the connectome of an organism. This allows us to find very complex motifs while abstracting them into a few simple types that take into account the brain regions to which the neurons involved belong.
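For readers new to network motifs, the sketch below counts one classical 3-node motif, the feed-forward loop, by brute force over node triples. It illustrates the generic motif idea the paper builds on, not the authors' region-aware motif concept; the toy graph and function name are ours:

```python
from itertools import permutations

def count_feedforward_loops(edges):
    """Count feed-forward loops (a->b, b->c, a->c), a classic 3-node motif in directed graphs."""
    adj = {}
    nodes = set()
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        nodes.update((u, v))
    count = 0
    for a, b, c in permutations(nodes, 3):
        if b in adj.get(a, ()) and c in adj.get(b, ()) and c in adj.get(a, ()):
            count += 1
    return count

# Toy directed "connectome": one feed-forward loop (n1, n2, n3) plus an extra edge
edges = [("n1", "n2"), ("n2", "n3"), ("n1", "n3"), ("n3", "n4")]
print(count_feedforward_loops(edges))  # 1
```

Real connectome-scale motif search needs subgraph-isomorphism machinery rather than this O(n³) enumeration, which is part of the motivation for the ad hoc approach proposed in the paper.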
{"title":"A complex network-based approach to detect and investigate connectome motifs in the larval Drosophila","authors":"Enrico Corradini , Federica Parlapiano , Arianna Ronci , Giorgio Terracina , Domenico Ursino","doi":"10.1016/j.compbiomed.2025.110135","DOIUrl":"10.1016/j.compbiomed.2025.110135","url":null,"abstract":"<div><div>Analyzing the connectome of an organism allows us to understand how different areas of its brain communicate with each other and how the structure of the brain is related to its function. Thanks to new technological advances, the connectome of increasingly complex organisms has been reconstructed in recent years. Drosophila melanogaster is currently the most complex organism whose complete connectome is known, both structurally and functionally. In this paper, we aim to contribute to the study of the Drosophila structural connectome by proposing an ad hoc approach for the discovery of network motifs that may be present in it. Unlike previous approaches, which focused on parts of the connectome of complex organisms or the entire connectome of very simple organisms, our approach operates at the whole-brain scale for the most complex organism whose complete connectome is currently known. Furthermore, while previous works have focused on extending existing motif extraction approaches to the connectome case, our approach proposes a motif concept specifically designed for the connectome of an organism. 
This allows us to find very complex motifs while abstracting them into a few simple types that take into account the brain regions to which the neurons involved belong.</div></div>","PeriodicalId":10578,"journal":{"name":"Computers in biology and medicine","volume":"192 ","pages":"Article 110135"},"PeriodicalIF":7.0,"publicationDate":"2025-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143870513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-04-24 | DOI: 10.1016/j.compbiomed.2025.110213
Erman Kibritoglu, Heba Yuksel
Background:
Classical methods for speeding up fracture healing usually rely on direct electrical stimulation and electromagnetic fields to boost the levels of growth factors at the fracture site. However, these techniques often concentrate on bone cells themselves rather than addressing the critical blood flow dynamics necessary for effective healing. This study introduces a mathematical model designed to explore the potential of dielectrophoretic forces (DEPFs) in improving blood flow at the fracture site. By modulating blood flow, the model seeks to enhance the supply of endothelial cells (ECs), vascular endothelial growth factor (VEGF), oxygen, and other vital nutrients and hormones essential for accelerating the fracture healing process.
Method:
The proposed approach includes a new technique, termed the S method, which assesses the non-uniformity of DEPFs by algebraically analyzing the electric field lines associated with positive and negative dielectrophoresis. We developed analytical equations to simulate various coil configurations, focusing on long bone fractures where blood flow is vertically oriented. The DEPF Factor (χDEPF) was used to measure the ratio of blood flow velocity in the presence of DEPFs to that in their absence, thus indicating the effectiveness of DEPFs in enhancing blood flow.
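As defined above, the DEPF Factor is simply the ratio of blood flow velocity with DEPFs applied to the baseline velocity. A minimal sketch (the function name and the velocity values are ours, purely illustrative, not measurements from the study):

```python
def depf_factor(v_with_depf, v_without_depf):
    """χDEPF: blood flow velocity with DEPFs applied divided by the baseline velocity.

    Values above 1.0 indicate that the dielectrophoretic forces enhance flow.
    """
    if v_without_depf <= 0:
        raise ValueError("baseline velocity must be positive")
    return v_with_depf / v_without_depf

# Illustrative (made-up) velocities in arbitrary units
print(depf_factor(2.4, 1.2))  # 2.0
```

Under this definition, the averages reported in the Results (1.8, 3.2, 7.9) mean the valeria coil roughly octuples the local flow velocity relative to the unstimulated baseline.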
Results:
The simulation results revealed that DEPF reaches its peak efficacy at the gamma dispersion band, with the most significant enhancement occurring at a frequency of 15 MHz. Specifically, the average values of χDEPF were 1.8, 3.2, and 7.9 for the catenary, lintearia, and valeria coils, respectively. Our computational model, which incorporated VEGF, ECs, and oxygen tension, demonstrated that the catenary coil slightly improved healing rates in impaired fractures, the lintearia coil normalized healing times between impaired and normal fractures, and the valeria coil not only accelerated healing in impaired fractures but also enhanced healing in normal fractures.
Conclusions:
This paper’s findings suggest that the valeria coil exhibits the best DEPF functionality, making it the optimal configuration for future experimental studies aimed at evaluating the efficacy of DEPF in promoting fracture healing. The ability of DEPFs to significantly enhance blood flow could represent a substantial advancement in the treatment of both normal and impaired fractures.
{"title":"Numerical analysis of coil designs to expedite fracture healing using dielectrophoresis with S method","authors":"Erman Kibritoglu, Heba Yuksel","doi":"10.1016/j.compbiomed.2025.110213","DOIUrl":"10.1016/j.compbiomed.2025.110213","url":null,"abstract":"<div><h3>Background:</h3><div>Classical methods for speeding up fracture healing usually rely on direct electrical stimulation and electromagnetic fields to boost the levels of growth factors at the fracture site. However, these techniques often concentrate on bone cells themselves rather than addressing the critical blood flow dynamics necessary for effective healing. This study introduces a mathematical model designed to explore the potential of dielectrophoretic forces (DEPFs) in improving blood flow at the fracture site. By adjusting blood flow, the model seeks to enhance the delivery of vital nutrients, hormones, and growth factors, including endothelial cells (ECs), vascular endothelial growth factor (VEGF) and oxygen, which are essential for accelerating the fracture healing process.</div></div><div><h3>Method:</h3><div>The proposed approach includes a new technique, termed the S method, which assesses the non-uniformity of DEPFs by algebraically analyzing the electric field lines associated with positive and negative dielectrophoresis. We developed analytical equations to simulate various coil configurations, focusing on long bone fractures where blood flow is vertically oriented. The DEPF Factor (<span><math><mi>χ</mi></math></span>DEPF) was used to measure the ratio of blood flow velocity in the presence of DEPFs compared to the absence of DEPFs, thus indicating the effectiveness of DEPF in enhancing blood flow.</div></div><div><h3>Results:</h3><div>The simulation results revealed that DEPF reaches its peak efficacy at the gamma dispersion band, with the most significant enhancement occurring at a frequency of 15 MHz. 
Specifically, the average values of <span><math><mi>χ</mi></math></span>DEPF were 1.8, 3.2, and 7.9 for the catenary, lintearia, and valeria coils, respectively. Our computational model, which incorporated VEGF, ECs, and oxygen tension, demonstrated that the catenary coil slightly improved healing rates in impaired fractures, the lintearia coil normalized healing times between impaired and normal fractures, and the valeria coil not only accelerated healing in impaired fractures but also enhanced healing in normal fractures.</div></div><div><h3>Conclusions:</h3><div>This paper’s findings suggest that the valeria coil exhibits the best DEPF functionality, making it the optimal configuration for future experimental studies aimed at evaluating the efficacy of DEPF in promoting fracture healing. The ability of DEPFs to significantly enhance blood flow could represent a substantial advancement in the treatment of both normal and impaired fractures.</div></div>","PeriodicalId":10578,"journal":{"name":"Computers in biology and medicine","volume":"192 ","pages":"Article 110213"},"PeriodicalIF":7.0,"publicationDate":"2025-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143870510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-04-24 | DOI: 10.1016/j.compbiomed.2025.110247
Robert Surma , Danuta Wojcieszyńska , Sikandar I. Mulla , Urszula Guzik
This article describes genetic algorithms (GAs), a widely used group of nature-inspired metaheuristics, and presents examples of their application in model-free optimization of bioprocesses. This approach is mainly used to solve optimization problems expressed through mathematical models. However, there are many situations in which laboratory optimization with GAs can be performed. In many cases, GAs have been reported to be superior to other popular optimization methods. Hence, their use is particularly recommended when multiple variables need to be studied simultaneously, the search space is large, and/or little is known about the interactions between individual factors. Despite their usefulness and simplicity, the number of reported experimental examples of non-model-based optimization using GAs remains limited. Real-world experimental evaluations, as opposed to mathematical fitness functions, are neither classified nor explicitly defined in the literature. The authors propose the term “Reality-Based Genetic Algorithms” and express hope for its widespread adoption. There is a significant need for both theoretical and empirical research on the parameter configurations of genetic algorithms for experimental optimization, and the authors anticipate that this gap will be addressed in the future. In the meantime, it is recommended to either use configurations that have been proven successful in similar studies or to experiment with different configurations to generate comparative data for future research.
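To illustrate the "Reality-Based" idea, the sketch below runs a standard generational GA in which the fitness call stands in for a laboratory measurement; in a real study each evaluation would be a physical experiment rather than a function. All names, parameter settings, and the toy response surface are our assumptions, not from the review:

```python
import random

random.seed(42)

def run_experiment(params):
    """Stand-in for a real laboratory measurement: a made-up smooth response surface."""
    x, y = params
    return -(x - 0.3) ** 2 - (y - 0.7) ** 2  # higher is better, optimum at (0.3, 0.7)

def genetic_algorithm(fitness, pop_size=20, generations=30, mutation=0.1):
    """Truncation selection + one-point crossover + Gaussian mutation on [0, 1] genes."""
    pop = [[random.random(), random.random()] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]            # keep the better half (elitism)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))         # one-point crossover
            child = a[:cut] + b[cut:]
            child = [min(1.0, max(0.0, g + random.gauss(0, mutation))) for g in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = genetic_algorithm(run_experiment)
print(best)
```

In an experimental setting the population size and generation count above would be constrained by how many assays the laboratory can run, which is exactly the configuration question the review identifies as under-researched.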
{"title":"Current strategy of non-model-based bioprocess optimizations with genetic algorithms in bioscience - A systematic review","authors":"Robert Surma , Danuta Wojcieszyńska , Sikandar I. Mulla , Urszula Guzik","doi":"10.1016/j.compbiomed.2025.110247","DOIUrl":"10.1016/j.compbiomed.2025.110247","url":null,"abstract":"<div><div>This article describes genetic algorithms (GAs), a widely used group of nature-inspired metaheuristics, and presents examples of their application in model-free optimization of bioprocesses. This approach is mainly used to solve optimization problems expressed through mathematical models. However, there are many situations in which laboratory optimization with GAs can be performed. In many cases, GAs have been reported to be superior to other popular optimization methods. Hence, their use is particularly recommended when multiple variables need to be studied simultaneously, the search space is large, and/or little is known about the interactions between individual factors. Despite their usefulness and simplicity, the number of reported experimental examples of non-model-based optimization using GAs remains limited. Real-world experimental evaluations, as opposed to mathematical fitness functions, are neither classified nor explicitly defined in the literature. The authors propose the term “Reality-Based Genetic Algorithms” and express hope for its widespread adoption. There is a significant need for both theoretical and empirical research on the parameter configurations of genetic algorithms for experimental optimization, and the authors anticipate that this gap will be addressed in the future. 
In the meantime, it is recommended to either use configurations that have been proven successful in similar studies or to experiment with different configurations to generate comparative data for future research.</div></div>","PeriodicalId":10578,"journal":{"name":"Computers in biology and medicine","volume":"192 ","pages":"Article 110247"},"PeriodicalIF":7.0,"publicationDate":"2025-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143863696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}