Pub Date: 2024-10-28 | DOI: 10.1016/j.cmpb.2024.108471
Zhenya Zang, Quan Wang, Mingliang Pan, Yuanzhe Zhang, Xi Chen, Xingda Li, David Day Uei Li
This study proposes a compact deep learning (DL) architecture and a highly parallelized computing hardware platform to reconstruct the blood flow index (BFi) in diffuse correlation spectroscopy (DCS). We leveraged a rigorous analytical model to generate autocorrelation functions (ACFs) to train the DL network. We assessed the accuracy of the proposed DL model using simulated and milk phantom data. Compared to convolutional neural networks (CNNs), our lightweight DL architecture achieves 66.7% and 18.5% improvements in MSE for BFi and the coherence factor β, respectively, in the synthetic-data evaluation. The accuracy of rBFi across different algorithms was also investigated. We further simplified the DL computing primitives, using subtraction for feature extraction with a view to the subsequent hardware implementation. We extensively explored computing parallelism and fixed-point quantization within the DL architecture. Given the DL model's compact size, we employed unrolling and pipelining optimizations for computation-intensive for-loops while storing all learned parameters in on-chip BRAMs. We also achieved pixel-wise parallelism, enabling simultaneous, real-time processing of 10 and 15 autocorrelation functions on Zynq-7000 and Zynq-UltraScale+ field-programmable gate arrays (FPGAs), respectively. Unlike existing FPGA accelerators that compute BFi and β from autocorrelation functions on standalone hardware, our approach is an encapsulated, end-to-end on-chip conversion from photon intensity data to the temporal intensity ACF and subsequently to the reconstructed BFi and β. This hardware platform provides an on-chip solution that replaces post-processing and miniaturizes modern DCS systems that use single-photon cameras. We also comprehensively compared the computational efficiency of our FPGA accelerator with CPU and GPU solutions.
{"title":"Towards high-performance deep learning architecture and hardware accelerator design for robust analysis in diffuse correlation spectroscopy","authors":"Zhenya Zang, Quan Wang, Mingliang Pan, Yuanzhe Zhang, Xi Chen, Xingda Li, David Day Uei Li","doi":"10.1016/j.cmpb.2024.108471","DOIUrl":"10.1016/j.cmpb.2024.108471","url":null,"abstract":"<div><div>This study proposes a compact deep learning (DL) architecture and a highly parallelized computing hardware platform to reconstruct the blood flow index (BFi) in diffuse correlation spectroscopy (DCS). We leveraged a rigorous analytical model to generate autocorrelation functions (ACFs) to train the DL network. We assessed the accuracy of the proposed DL using simulated and milk phantom data. Compared to convolutional neural networks (CNN), our lightweight DL architecture achieves 66.7% and 18.5% improvement in MSE for BFi and the coherence factor <em>β</em>, using synthetic data evaluation. The accuracy of rBFi over different algorithms was also investigated. We further simplified the DL computing primitives using subtraction for feature extraction, considering further hardware implementation. We extensively explored computing parallelism and fixed-point quantization within the DL architecture. With the DL model's compact size, we employed unrolling and pipelining optimizations for computation-intensive for-loops in the DL model while storing all learned parameters in on-chip BRAMs. We also achieved pixel-wise parallelism, enabling simultaneous, real-time processing of 10 and 15 autocorrelation functions on Zynq-7000 and Zynq-UltraScale+ field programmable gate array (FPGA), respectively. Unlike existing FPGA accelerators that produce BFi and the <em>β</em> from autocorrelation functions on standalone hardware, our approach is an encapsulated, end-to-end on-chip conversion process from intensity photon data to the temporal intensity ACF and subsequently reconstructing BFi and <em>β</em>. This hardware platform achieves an on-chip solution to replace post-processing and miniaturize modern DCS systems that use single-photon cameras. We also comprehensively compared the computational efficiency of our FPGA accelerator to CPU and GPU solutions.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"258 ","pages":"Article 108471"},"PeriodicalIF":4.9,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142616209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
To achieve our aim, two new modules were implemented in the software. The first module simulated the TandemHeart™ pump in a right ventricular assist device (RVAD) configuration, both as a right atrial-pulmonary arterial and as a right ventricular-pulmonary arterial connection, driven by four different rotational speeds. The second module reproduced the behaviour of the ProtekDuo™ cannula combined with the TandemHeart™.
Results
The effects induced on the main haemodynamic and energetic variables were analysed for both the right atrial-pulmonary arterial and the right ventricular-pulmonary arterial configurations at different pump rotational speeds and following Milrinone administration. The TandemHeart™ increased right ventricular end-systolic volume by 10 %; larger increases were evident for higher speeds (6000 and 7500 rpm) and for connections with 21-Fr inflow and 17-Fr outflow cannula, respectively. Both TandemHeart™ and ProtekDuo™ support increased left ventricular preload. When different RVAD settings were used, Milrinone therapy increased the left ventricular pressure-volume area and slightly decreased the right ventricular pressure-volume area. A reduction in oxygen consumption (demand) was observed, with reduced right stroke work and pressure-volume area and increased oxygen supply (coronary blood flow).
Conclusions
The outcome of our simulations confirms the effective haemodynamic assistance provided by the ProtekDuo™ as observed in the acute clinical setting. A simulation approach based on pressure-volume analysis combined with modified time-varying elastance and lumped-parameter modelling remains a suitable tool for clinical applications.
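For readers unfamiliar with the modelling framework named in the conclusions, the sketch below illustrates the time-varying elastance idea on which such lumped-parameter simulators rest: ventricular pressure is generated as P(t) = E(t) * (V(t) - V0), with E(t) interpolating between diastolic and systolic elastance through a normalized activation function. It is a toy sketch with assumed parameter values, not the authors' TandemHeart™/ProtekDuo™ model.

```python
# Toy time-varying elastance sketch; all parameter values below are illustrative assumptions.
import numpy as np

def activation(t, T=0.8, Ts=0.3):
    """Half-sine activation over a systole of duration Ts within a beat of period T."""
    tc = t % T
    return np.where(tc < Ts, np.sin(np.pi * tc / Ts), 0.0)

def ventricular_pressure(t, V, Emin=0.05, Emax=0.6, V0=10.0):
    """Pressure in mmHg for volume V in mL; Emin/Emax in mmHg/mL (illustrative values)."""
    E = Emin + (Emax - Emin) * activation(t)   # elastance rises during systole
    return E * (V - V0)

t = np.linspace(0, 1.6, 200)        # two beats
V = 120 - 50 * activation(t)        # crude volume waveform just to exercise the model
P = ventricular_pressure(t, V)
print(round(float(P.max()), 1), round(float(P.min()), 1))
```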
Pub Date: 2024-10-28 | DOI: 10.1016/j.cmpb.2024.108480
Emmanuel Eghan-Acquah, Alireza Y Bavil, David Bade, Martina Barzan, Azadeh Nasseri, David J Saxby, Stefanie Feih, Christopher P Carty
Proximal femoral osteotomy (PFO) is a frequently performed surgical procedure to correct hip deformities in the paediatric population. The optimal size of the blade plate implant in PFO is a critical but underexplored factor influencing biomechanical outcomes. This study introduces a novel approach to refine implant selection by integrating personalized neuromusculoskeletal modelling with finite element analysis. Using computed tomography scans and walking gait data from six paediatric patients with various pathologies and deformities, we assessed the impact of four distinct implant width-to-femoral neck diameter (W-D) ratios (30 %, 40 %, 50 %, and 60 %) on surgical outcomes. The results show that the risk of implant yield generally decreases with increasing W-D ratio, except for Patient P2, where the yield risk remained below 100 % across all ratios. The implant factor of safety (FoS) increased with larger W-D ratios, except for Patients P2 and P6, where the highest FoS was 2.60 (P2) and 0.49 (P6) at a 60 % W-D ratio. Bone-implant micromotion consistently remained below 40 µm at higher W-D ratios, with a 50 % W-D ratio striking the optimal balance for mechanical stability in all patients except P6. Although interfragmentary and principal femoral strains did not display consistent trends across all patients, they highlight the need for patient-specific approaches to ensure effective fracture healing. These findings highlight the importance of patient-specific considerations in implant selection, offering surgeons a more informed pathway to enhance patient outcomes and extend implant longevity. Additionally, the insights gained from this study provide valuable guidance for manufacturers in designing next-generation blade plates tailored to improve biomechanical performance in paediatric orthopaedics.
{"title":"Enhancing biomechanical outcomes in proximal femoral osteotomy through optimised blade plate sizing: A neuromusculoskeletal-informed finite element analysis","authors":"Emmanuel Eghan-Acquah , Alireza Y Bavil , David Bade , Martina Barzan , Azadeh Nasseri , David J Saxby , Stefanie Feih , Christopher P Carty","doi":"10.1016/j.cmpb.2024.108480","DOIUrl":"10.1016/j.cmpb.2024.108480","url":null,"abstract":"<div><div>Proximal femoral osteotomy (PFO) is a frequently performed surgical procedure to correct hip deformities in the paediatric population. The optimal size of the blade plate implant in PFO is a critical but underexplored factor influencing biomechanical outcomes. This study introduces a novel approach to refine implant selection by integrating personalized neuromusculoskeletal modelling with finite element analysis. Using computed tomography scans and walking gait data from six paediatric patients with various pathologies and deformities, we assessed the impact of four distinct implant width-to-femoral neck diameter (W-D) ratios (30 %, 40 %, 50 %, and 60 %) on surgical outcomes. The results show that the risk of implant yield generally decreases with increasing W-D ratio, except for Patient P2, where the yield risk remained below 100 % across all ratios. The implant factor of safety (FoS) increased with larger W-D ratios, except for Patients P2 and P6, where the highest FoS was 2.60 (P2) and 0.49 (P6) at a 60 % W-D ratio. Bone-implant micromotion consistently remained below 40 µm at higher W-D ratios, with a 50 % W-D ratio striking the optimal balance for mechanical stability in all patients except P6. Although interfragmentary and principal femoral strains did not display consistent trends across all patients, they highlight the need for patient-specific approaches to ensure effective fracture healing. These findings highlight the importance of patient-specific considerations in implant selection, offering surgeons a more informed pathway to enhance patient outcomes and extend implant longevity. Additionally, the insights gained from this study provide valuable guidance for manufacturers in designing next-generation blade plates tailored to improve biomechanical performance in paediatric orthopaedics.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"257 ","pages":"Article 108480"},"PeriodicalIF":4.9,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142567736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-26 | DOI: 10.1016/j.cmpb.2024.108481
Chenxi Sun, Zhi-Ping Liu
Background and objective
Immunotherapy holds promise in enhancing pathological complete response rates in breast cancer, albeit confined to a select cohort of patients. Consequently, pinpointing factors predictive of treatment responsiveness is of paramount importance. Gene expression and regulation, inherently operating within intricate networks, constitute fundamental molecular machinery for cellular processes and often serve as robust biomarkers. Nevertheless, contemporary feature selection approaches grapple with two key challenges: opacity in modeling and limited accounting for gene-gene interactions.
Methods
To address these limitations, we devise a novel feature selection methodology grounded in cooperative game theory, harmoniously integrating with sophisticated machine learning models. This approach identifies interconnected gene regulatory network biomarker modules with a priori genetic linkage architecture. Specifically, we leverage Shapley values on the network to quantify feature importance, while strategically constraining their integration based on network expansion principles and nodal adjacency, thereby fostering enhanced interpretability in feature selection. We apply our method to a publicly available single-cell RNA sequencing dataset of breast cancer immunotherapy responses, using the identified feature gene set as biomarkers. Functional enrichment analysis with independent validations further illustrates their effective predictive performance.
Results
We demonstrate the effectiveness of the proposed method on data with network structure. It unveiled a cohesive biomarker module encompassing 27 genes for immunotherapy response. Notably, this module proves adept at precisely predicting anti-PD1 therapeutic outcomes in breast cancer patients, with a classification accuracy of 0.905 and an AUC of 0.971, underscoring its capacity to illuminate gene functionalities.
Conclusion
The proposed method is effective for identifying network module biomarkers, and the detected anti-PD1 response biomarkers can enrich our understanding of the underlying physiological mechanisms of immunotherapy, with promising applications for realizing precision medicine.
{"title":"Discovering explainable biomarkers for breast cancer anti-PD1 response via network Shapley value analysis","authors":"Chenxi Sun, Zhi-Ping Liu","doi":"10.1016/j.cmpb.2024.108481","DOIUrl":"10.1016/j.cmpb.2024.108481","url":null,"abstract":"<div><h3>Background and objective</h3><div>Immunotherapy holds promise in enhancing pathological complete response rates in breast cancer, albeit confined to a select cohort of patients. Consequently, pinpointing factors predictive of treatment responsiveness is of paramount importance. Gene expression and regulation, inherently operating within intricate networks, constitute fundamental molecular machinery for cellular processes and often serve as robust biomarkers. Nevertheless, contemporary feature selection approaches grapple with two key challenges: opacity in modeling and scarcity in accounting for gene-gene interactions</div></div><div><h3>Methods</h3><div>To address these limitations, we devise a novel feature selection methodology grounded in cooperative game theory, harmoniously integrating with sophisticated machine learning models. This approach identifies interconnected gene regulatory network biomarker modules with priori genetic linkage architecture. Specifically, we leverage Shapley values on network to quantify feature importance, while strategically constraining their integration based on network expansion principles and nodal adjacency, thereby fostering enhanced interpretability in feature selection. We apply our methods to a publicly available single-cell RNA sequencing dataset of breast cancer immunotherapy responses, using the identified feature gene set as biomarkers. Functional enrichment analysis with independent validations further illustrates their effective predictive performance</div></div><div><h3>Results</h3><div>We demonstrate the sophistication and excellence of the proposed method in data with network structure. It unveiled a cohesive biomarker module encompassing 27 genes for immunotherapy response. Notably, this module proves adept at precisely predicting anti-PD1 therapeutic outcomes in breast cancer patients with classification accuracy of 0.905 and AUC value of 0.971, underscoring its unique capacity to illuminate gene functionalities</div></div><div><h3>Conclusion</h3><div>The proposed method is effective for identifying network module biomarkers, and the detected anti-PD1 response biomarkers can enrich our understanding of the underlying physiological mechanisms of immunotherapy, which have a promising application for realizing precision medicine.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"257 ","pages":"Article 108481"},"PeriodicalIF":4.9,"publicationDate":"2024-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142564115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-26 | DOI: 10.1016/j.cmpb.2024.108479
Junqi Wang, Hailong Li, Kim M Cecil, Mekibib Altaye, Nehal A Parikh, Lili He
Background and Objective
Very preterm infants are susceptible to neurodevelopmental impairments, necessitating early detection of prognostic biomarkers for timely intervention. The study aims to explore possible functional biomarkers for very preterm infants at birth that relate to their future cognitive and motor development using resting-state fMRI. Prior studies are limited by small sample sizes and by the lack of efficient functional connectome (FC) construction algorithms that can handle the noisy data contained in neonatal time series, leading to equivocal findings. Therefore, we first propose an enhanced functional connectome construction algorithm as a prerequisite step. We then apply the new FC construction algorithm to our large prospective very preterm cohort to explore multi-level neurodevelopmental biomarkers.
Methods
There exists an intrinsic relationship between the structural connectome (SC) and FC, with a notable coupling between the two. This observation implies a putative property of graph signal smoothness on the SC as well. Yet, this property has not been fully exploited for constructing intrinsic dynamic FC (dFC). In this study, we propose an advanced dFC learning model, dFC-Igloo, which leverages SC information to iteratively refine dFC estimations by applying graph signal smoothness to both FC and SC. The model was evaluated on artificial small-world graphs and simulated graph signals.
Results
The proposed model achieved the best and most robust recovery of the ground-truth graph across different noise levels and simulated SC pairs. The model was further applied to a cohort of very preterm infants from five Neonatal Intensive Care Units, where an enhanced dFC was obtained for each infant. Based on the improved dFC, we identified neurodevelopmental biomarkers for neonates across connectome-wide, regional, and subnetwork scales.
Conclusion
The identified markers correlate with cognitive and motor developmental outcomes, offering insights into early brain development and potential neurodevelopmental challenges.
{"title":"DFC-Igloo: A dynamic functional connectome learning framework for identifying neurodevelopmental biomarkers in very preterm infants","authors":"Junqi Wang , Hailong Li , Kim M Cecil , Mekibib Altaye , Nehal A Parikh , Lili He","doi":"10.1016/j.cmpb.2024.108479","DOIUrl":"10.1016/j.cmpb.2024.108479","url":null,"abstract":"<div><h3>Background and Objective</h3><div>Very preterm infants are susceptible to neurodevelopmental impairments, necessitating early detection of prognostic biomarkers for timely intervention. The study aims to explore possible functional biomarkers for very preterm infants at born that relate to their future cognitive and motor development using resting-state fMRI. Prior studies are limited by the sample size and suffer from efficient functional connectome (FC) construction algorithms that can handle the noisy data contained in neonatal time series, leading to equivocal findings. Therefore, we first propose an enhanced functional connectome construction algorithm as a prerequisite step. We then apply the new FC construction algorithm to our large prospective very preterm cohort to explore multi-level neurodevelopmental biomarkers.</div></div><div><h3>Methods</h3><div>There exists an intrinsic relationship between the structural connectome (SC) and FC, with a notable coupling between the two. This observation implies a putative property of graph signal smoothness on the SC as well. Yet, this property has not been fully exploited for constructing intrinsic dFC. In this study, we proposed an advanced dynamic FC (dFC) learning model, dFC-Igloo, which leveraged SC information to iteratively refine dFC estimations by applying graph signal smoothness to both FC and SC. The model was evaluated on artificial small-world graphs and simulated graph signals.</div></div><div><h3>Results</h3><div>The proposed model achieved the best and most robust recovery of the ground truth graph across different noise levels and simulated SC pairs from the simulation. The model was further applied to a cohort of very preterm infants from five Neonatal Intensive Care Units, where an enhanced dFC was obtained for each infant. Based on the improved dFC, we identified neurodevelopmental biomarkers for neonates across connectome-wide, regional, and subnetwork scales.</div></div><div><h3>Conclusion</h3><div>The identified markers correlate with cognitive and motor developmental outcomes, offering insights into early brain development and potential neurodevelopmental challenges.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"257 ","pages":"Article 108479"},"PeriodicalIF":4.9,"publicationDate":"2024-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142567734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-25 | DOI: 10.1016/j.cmpb.2024.108466
Jianye Shi, Kiran Manjunatha, Felix Vogt, Stefanie Reese
Background:
The intricate process of coronary in-stent restenosis (ISR) involves the interplay between different mediators, including platelet-derived growth factor, transforming growth factor-β, extracellular matrix, smooth muscle cells, endothelial cells, and drug elution from the stent. Modeling such complex multiphysics phenomena demands extensive computational resources and time.
Methods:
This paper proposes a novel non-intrusive data-driven reduced order modeling approach for the underlying multiphysics time-dependent parametrized problem. In the offline phase, a 3D convolutional autoencoder, comprising an encoder and decoder, is trained to achieve dimensionality reduction. The encoder condenses the full-order solution into a lower-dimensional latent space, while the decoder facilitates the reconstruction of the full solution from the latent space. To deal with the 5D input datasets (3D geometry + time series + multiple output channels), two ingredients are explored. The first approach incorporates time as an additional parameter and applies 3D convolution on individual time steps, encoding a distinct latent variable for each parameter instance within each time step. The second approach reshapes the 3D geometry into a 2D plane along a less interactive axis and stacks all time steps in the third direction for each parameter instance. This rearrangement generates a larger and complete dataset for one parameter instance, resulting in a singular latent variable across the entire discrete time-series. In both approaches, the multiple outputs are considered automatically in the convolutions. Moreover, Gaussian process regression is applied to establish correlations between the latent variable and the input parameter.
Results:
The constitutive model reveals a significant acceleration in neointimal growth between 30 and 60 days post percutaneous coronary intervention (PCI). The surrogate models applying both approaches exhibit high accuracy in pointwise error, with the first approach showcasing smaller errors across the entire evaluation period for all outputs. The parameter study on drug dosage against ISR rates provides noteworthy insights into neointimal growth, where the nonlinear dependence of ISR rates on the peak drug flux exhibits intriguing periodic patterns. Applying the trained model, the rate of ISR is effectively evaluated, and the optimal parameter range for drug dosage is identified.
Conclusion:
The demonstrated non-intrusive reduced order surrogate model proves to be a powerful tool for predicting ISR outcomes. Moreover, the proposed method lays the foundation for real-time simulations and optimization of PCI parameters.
{"title":"Data-driven reduced order surrogate modeling for coronary in-stent restenosis","authors":"Jianye Shi , Kiran Manjunatha , Felix Vogt , Stefanie Reese","doi":"10.1016/j.cmpb.2024.108466","DOIUrl":"10.1016/j.cmpb.2024.108466","url":null,"abstract":"<div><h3>Background:</h3><div>The intricate process of coronary in-stent restenosis (ISR) involves the interplay between different mediators, including platelet-derived growth factor, transforming growth factor-<span><math><mi>β</mi></math></span>, extracellular matrix, smooth muscle cells, endothelial cells, and drug elution from the stent. Modeling such complex multiphysics phenomena demands extensive computational resources and time.</div></div><div><h3>Methods:</h3><div>This paper proposes a novel non-intrusive data-driven reduced order modeling approach for the underlying multiphysics time-dependent parametrized problem. In the offline phase, a 3D convolutional autoencoder, comprising an encoder and decoder, is trained to achieve dimensionality reduction. The encoder condenses the full-order solution into a lower-dimensional latent space, while the decoder facilitates the reconstruction of the full solution from the latent space. To deal with the 5D input datasets (3D geometry + time series + multiple output channels), two ingredients are explored. The first approach incorporates time as an additional parameter and applies 3D convolution on individual time steps, encoding a distinct latent variable for each parameter instance within each time step. The second approach reshapes the 3D geometry into a 2D plane along a less interactive axis and stacks all time steps in the third direction for each parameter instance. This rearrangement generates a larger and complete dataset for one parameter instance, resulting in a singular latent variable across the entire discrete time-series. In both approaches, the multiple outputs are considered automatically in the convolutions. Moreover, Gaussian process regression is applied to establish correlations between the latent variable and the input parameter.</div></div><div><h3>Results:</h3><div>The constitutive model reveals a significant acceleration in neointimal growth between <span><math><mrow><mn>30</mn><mo>−</mo><mn>60</mn></mrow></math></span> days post percutaneous coronary intervention (PCI). The surrogate models applying both approaches exhibit high accuracy in pointwise error, with the first approach showcasing smaller errors across the entire evaluation period for all outputs. The parameter study on drug dosage against ISR rates provides noteworthy insights of neointimal growth, where the nonlinear dependence of ISR rates on the peak drug flux exhibits intriguing periodic patterns. Applying the trained model, the rate of ISR is effectively evaluated, and the optimal parameter range for drug dosage is identified.</div></div><div><h3>Conclusion:</h3><div>The demonstrated non-intrusive reduced order surrogate model proves to be a powerful tool for predicting ISR outcomes. 
Moreover, the proposed method lays the foundation for real-time simulations and optimization of PCI parameters.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"257 ","pages":"Article 108466"},"PeriodicalIF":4.9,"publicationDate":"2024-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142564112","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
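A minimal sketch of the offline stage described above follows: a small 3D convolutional autoencoder compresses a multi-channel voxel field into a latent vector, and a Gaussian process maps the physical parameter to that latent vector. The grid size, channel count, latent dimension, and layer widths are illustrative assumptions rather than the configuration used in the paper.

```python
# Toy 3D convolutional autoencoder + Gaussian-process latent map (assumed sizes).
import torch
import torch.nn as nn
from sklearn.gaussian_process import GaussianProcessRegressor

C, D, H, W, LATENT = 3, 16, 16, 16, 8   # channels (e.g. species), voxel grid, latent size

class ConvAE3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(C, 8, 3, stride=2, padding=1), nn.ReLU(),    # 16^3 -> 8^3
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),   # 8^3 -> 4^3
            nn.Flatten(), nn.Linear(16 * 4 * 4 * 4, LATENT),
        )
        self.dec = nn.Sequential(
            nn.Linear(LATENT, 16 * 4 * 4 * 4), nn.Unflatten(1, (16, 4, 4, 4)),
            nn.ConvTranspose3d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(8, C, 4, stride=2, padding=1),
        )

    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

ae = ConvAE3D()
snapshots = torch.rand(10, C, D, H, W)              # placeholder full-order snapshots
recon, z = ae(snapshots)                            # training loop (MSE on recon) omitted
params = torch.linspace(0, 1, 10).reshape(-1, 1)    # e.g. normalized drug dosage
gpr = GaussianProcessRegressor().fit(params.numpy(), z.detach().numpy())  # parameter -> latent
print(recon.shape, gpr.predict(params[:2].numpy()).shape)
```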
Pub Date: 2024-10-24 | DOI: 10.1016/j.cmpb.2024.108456
Abouzar Kaboudian, Richard A. Gray, Ilija Uzelac, Elizabeth M. Cherry, Flavio H. Fenton
Background and Objective:
Numerical simulations are valuable tools for studying cardiac arrhythmias. Not only do they complement experimental studies, but there is also an increasing expectation for their use in clinical applications to guide patient-specific procedures. However, numerical studies that solve the reaction–diffusion equations describing cardiac electrical activity remain challenging to set up, are time-consuming, and in many cases, are prohibitively computationally expensive for long studies. The computational cost of cardiac simulations of complex models on anatomically accurate structures necessitates parallel computing. Graphics processing units (GPUs), which have thousands of cores, have been introduced as a viable technology for carrying out fast cardiac simulations, sometimes including real-time interactivity. Our main objective is to increase the performance and accuracy of such GPU implementations while conserving computational resources.
Methods:
In this work, we present a compression algorithm that can be used to conserve GPU memory and improve efficiency by managing the sparsity that is inherent in using Cartesian grids to represent cardiac structures directly obtained from high-resolution MRI and mCT scans. Furthermore, we present a discretization scheme that includes the cross-diagonal terms in the computational cell to increase numerical accuracy, which is especially important for simulating thin tissue sections without the need for costly mesh refinement.
Results:
Interactive WebGL simulations of atrial/ventricular structures (on PCs, laptops, tablets, and phones) demonstrate the algorithm’s ability to reduce memory demand by an order of magnitude and achieve calculations up to 20x faster. We further showcase its superiority in slender tissues and validate results against experiments performed in live explanted human hearts.
Conclusions:
In this work, we present a compression algorithm that accelerates electrical activity simulations on realistic anatomies by an order of magnitude (up to 20x), thereby allowing the use of finer grid resolutions while conserving GPU memory. Additionally, improved accuracy is achieved through cross-diagonal terms, which are essential for thin tissues, often found in heart structures such as pectinate muscles and trabeculae, as well as Purkinje fibers. Our method enables interactive simulations, including interactive manipulation of domain boundaries (unlike finite element/volume methods). Finally, agreement with experiments and the ease of mesh import into WebGL pave the way for virtual cohorts and digital twins, aiding arrhythmia analysis and personalized therapies.
{"title":"Fast interactive simulations of cardiac electrical activity in anatomically accurate heart structures by compressing sparse uniform cartesian grids","authors":"Abouzar Kaboudian , Richard A. Gray , Ilija Uzelac , Elizabeth M. Cherry , Flavio. H. Fenton","doi":"10.1016/j.cmpb.2024.108456","DOIUrl":"10.1016/j.cmpb.2024.108456","url":null,"abstract":"<div><h3>Background and Objective:</h3><div>Numerical simulations are valuable tools for studying cardiac arrhythmias. Not only do they complement experimental studies, but there is also an increasing expectation for their use in clinical applications to guide patient-specific procedures. However, numerical studies that solve the reaction–diffusion equations describing cardiac electrical activity remain challenging to set up, are time-consuming, and in many cases, are prohibitively computationally expensive for long studies. The computational cost of cardiac simulations of complex models on anatomically accurate structures necessitates parallel computing. Graphics processing units (GPUs), which have thousands of cores, have been introduced as a viable technology for carrying out fast cardiac simulations, sometimes including real-time interactivity. Our main objective is to increase the performance and accuracy of such GPU implementations while conserving computational resources.</div></div><div><h3>Methods:</h3><div>In this work, we present a compression algorithm that can be used to conserve GPU memory and improve efficiency by managing the sparsity that is inherent in using Cartesian grids to represent cardiac structures directly obtained from high-resolution MRI and mCT scans. Furthermore, we present a discretization scheme that includes the cross-diagonal terms in the computational cell to increase numerical accuracy, which is especially important for simulating thin tissue sections without the need for costly mesh refinement.</div></div><div><h3>Results:</h3><div>Interactive WebGL simulations of atrial/ventricular structures (on PCs, laptops, tablets, and phones) demonstrate the algorithm’s ability to reduce memory demand by an order of magnitude and achieve calculations up to 20x faster. We further showcase its superiority in slender tissues and validate results against experiments performed in live explanted human hearts.</div></div><div><h3>Conclusions:</h3><div>In this work, we present a compression algorithm that accelerates electrical activity simulations on realistic anatomies by an order of magnitude (up to 20x), thereby allowing the use of finer grid resolutions while conserving GPU memory. Additionally, improved accuracy is achieved through cross-diagonal terms, which are essential for thin tissues, often found in heart structures such as pectinate muscles and trabeculae, as well as Purkinje fibers. Our method enables interactive simulations with even interactive domain boundary manipulation (unlike finite element/volume methods). 
Finally, agreement with experiments and ease of mesh import into WebGL paves the way for virtual cohorts and digital twins, aiding arrhythmia analysis and personalized therapies.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"257 ","pages":"Article 108456"},"PeriodicalIF":4.9,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142544196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
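The sketch below illustrates the core compression idea in isolation: rather than storing state for every voxel of the Cartesian bounding box, keep a compact array of active (tissue) voxels plus an index map used for neighbour lookups. The toy geometry and the single-axis neighbour gather are simplifications; the paper's WebGL implementation and cross-diagonal stencil are not reproduced here.

```python
# Compacting a sparse Cartesian grid into an active-voxel list with an index map (toy example).
import numpy as np

rng = np.random.default_rng(0)
mask = rng.random((64, 64, 64)) < 0.15               # sparse "tissue" voxels (~15% occupancy)

active = np.flatnonzero(mask.ravel())                # compact list of active voxel indices
index_map = np.full(mask.size, -1, dtype=np.int64)   # full grid -> compact index (-1 = empty)
index_map[active] = np.arange(active.size)

u = rng.random(active.size)                          # a state variable stored only for active voxels

# Gather each active voxel's neighbour along the fastest-varying axis via the index map.
nbr_compact = index_map[np.clip(active + 1, 0, mask.size - 1)]
has_nbr = nbr_compact >= 0
u_nbr = np.where(has_nbr, u[np.maximum(nbr_compact, 0)], u)   # fall back to self where no neighbour

print(f"stored {active.size} of {mask.size} voxels ({active.size / mask.size:.1%}); "
      f"{has_nbr.mean():.1%} of active voxels have an active neighbour")
```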
Pub Date: 2024-10-23 | DOI: 10.1016/j.cmpb.2024.108474
Jiaying Liu, Anna Corti, Valentina D.A. Corino, Luca Mainardi
Background and objective
Low-dose computed tomography (LDCT) screening has shown promise in reducing lung cancer mortality; however, it suffers from high false positive rates and a scarcity of available annotated datasets. To overcome these challenges, we propose a novel approach using synthetic LDCT images generated from standard-dose CT (SDCT) scans from the LIDC-IDRI dataset. Our objective is to develop and validate an interpretable radiomics-based model for distinguishing likely benign from likely malignant pulmonary nodules.
Methods
From a total of 1010 CT images (695 SDCTs and 315 LDCTs), we degraded the SDCTs in the sinogram domain and obtained 1950 nodules as the training set. The 675 nodules from the LDCTs were stratified into 50%-50% partitions for validation and testing. Radiomic features were extracted from the nodules, and three feature sets were assessed: a) only shape and size (SS) features, b) all features except SS features, and c) all features. A systematic pipeline was developed to optimize the feature set and evaluate multiple machine learning models. Models were trained on the degraded SDCT nodules and validated and tested on the LDCT nodules.
Results
Training a logistic regression model using three SS features yielded the most promising results, achieving mean balanced accuracy, sensitivity, specificity, and AUC-ROC scores of 0.81, 0.76, 0.85, and 0.87, respectively, on the test set.
Conclusions
Our study demonstrates the feasibility and effectiveness of using synthetic LDCT images for developing a relatively accurate radiomics-based model in lung nodule classification. This approach addresses challenges associated with LDCT screening, offering potential implications for improving lung cancer detection and reducing false positives.
{"title":"Lung nodule classification using radiomics model trained on degraded SDCT images","authors":"Jiaying Liu , Anna Corti , Valentina D.A. Corino , Luca Mainardi","doi":"10.1016/j.cmpb.2024.108474","DOIUrl":"10.1016/j.cmpb.2024.108474","url":null,"abstract":"<div><h3>Background and objective</h3><div>Low-dose computed tomography (LDCT) screening has shown promise in reducing lung cancer mortality; however, it suffers from high false positive rates and a scarcity of available annotated datasets. To overcome these challenges, we propose a novel approach using synthetic LDCT images generated from standard-dose CT (SDCT) scans from the LIDC-IDRI dataset. Our objective is to develop and validate an interpretable radiomics-based model for distinguishing likely benign from likely malignant pulmonary nodules.</div></div><div><h3>Methods</h3><div>From a total of 1010 CT images (695 SDCTs and 315 LDCTs), we degraded SDCTs in the sinogram domain and obtained 1950 nodules as the training set. The 675 nodules from the LDCTs were stratified into 50%-50% partitions for validation and testing. Radiomic features were extracted from nodules, and three feature sets were assessed using: a) only shape and size (SS) features, b) all features but SS features, and c) all features. A systematic pipeline was developed to optimize the feature set and evaluate multiple machine learning models. Models were trained using degraded SDCT, validated and tested on the LDCT nodules.</div></div><div><h3>Results</h3><div>Training a logistic regression model using three SS features yielded the most promising results, achieving on the test set mean balanced accuracy, sensitivity, specificity, and AUC-ROC scores of 0.81, 0.76, 0.85, and 0.87, respectively.</div></div><div><h3>Conclusions</h3><div>Our study demonstrates the feasibility and effectiveness of using synthetic LDCT images for developing a relatively accurate radiomics-based model in lung nodule classification. This approach addresses challenges associated with LDCT screening, offering potential implications for improving lung cancer detection and reducing false positives.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"257 ","pages":"Article 108474"},"PeriodicalIF":4.9,"publicationDate":"2024-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142552102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-22 | DOI: 10.1016/j.cmpb.2024.108463
Saeid Shakeri, Farshad Almasganj
Background and objective
Background subtraction of X-ray coronary angiograms (XCA) can significantly improve the diagnosis and treatment of coronary vessel diseases. The XCA background is complex and dynamic due to structures with different intensities and independent motion patterns, making XCA background subtraction challenging.
Methods
The current work proposes an online tree-structure-constrained robust PCA (OTS-RPCA) method to subtract the XCA background. A morphological closing operation is used as a pre-processing step to remove large-scale structures like the spine, chest and diaphragm. In the following, the XCA sequence is decomposed into three different subspaces: low-rank background, residual dynamic background and vascular foreground. A tree-structured norm is introduced and applied to the vascular submatrix to guarantee the vessel spatial coherency. Moreover, the residual dynamic background is separately extracted to remove noise and motion artifacts from the vascular foreground. The proposed algorithm also employs an adaptive regularization coefficient that tracks the vessel area changes in the XCA frames.
Results
The proposed method is evaluated on two datasets of real clinical and synthetic low-contrast XCA sequences of 38 patients using the global and local contrast-to-noise ratio (CNR) and structural similarity index (SSIM) criteria. For the real XCA dataset, the average values of global CNR, local CNR and SSIM are 6.27, 3.07 and 0.97, while these values over the synthetic low-contrast dataset are obtained as 5.15, 2.69 and 0.94, respectively. The implemented quantitative and qualitative experiments verify the superiority of the proposed method over seven selected state-of-the-art methods in increasing the coronary vessel contrast and preserving the vessel structure.
Conclusions
The proposed OTS-RPCA background subtraction method accurately subtracts backgrounds from XCA images. Our method might provide the basis for reducing the contrast agent dose and the number of needed injections in coronary interventions.
{"title":"Online tree-structure-constrained RPCA for background subtraction of X-ray coronary angiography images","authors":"Saeid Shakeri, Farshad Almasganj","doi":"10.1016/j.cmpb.2024.108463","DOIUrl":"10.1016/j.cmpb.2024.108463","url":null,"abstract":"<div><h3>Background and objective</h3><div>Background subtraction of X-ray coronary angiograms (XCA) can significantly improve the diagnosis and treatment of coronary vessel diseases. The XCA background is complex and dynamic due to structures with different intensities and independent motion patterns, making XCA background subtraction challenging.</div></div><div><h3>Methods</h3><div>The current work proposes an online tree-structure-constrained robust PCA (OTS-RPCA) method to subtract the XCA background. A morphological closing operation is used as a pre-processing step to remove large-scale structures like the spine, chest and diaphragm. In the following, the XCA sequence is decomposed into three different subspaces: low-rank background, residual dynamic background and vascular foreground. A tree-structured norm is introduced and applied to the vascular submatrix to guarantee the vessel spatial coherency. Moreover, the residual dynamic background is separately extracted to remove noise and motion artifacts from the vascular foreground. The proposed algorithm also employs an adaptive regularization coefficient that tracks the vessel area changes in the XCA frames.</div></div><div><h3>Results</h3><div>The proposed method is evaluated on two datasets of real clinical and synthetic low-contrast XCA sequences of 38 patients using the global and local contrast-to-noise ratio (CNR) and structural similarity index (SSIM) criteria. For the real XCA dataset, the average values of global CNR, local CNR and SSIM are 6.27, 3.07 and 0.97, while these values over the synthetic low-contrast dataset are obtained as 5.15, 2.69 and 0.94, respectively. The implemented quantitative and qualitative experiments verify the superiority of the proposed method over seven selected state-of-the-art methods in increasing the coronary vessel contrast and preserving the vessel structure.</div></div><div><h3>Conclusions</h3><div>The proposed OTS-RPCA background subtraction method accurately subtracts backgrounds from XCA images. Our method might provide the basis for reducing the contrast agent dose and the number of needed injections in coronary interventions.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"258 ","pages":"Article 108463"},"PeriodicalIF":4.9,"publicationDate":"2024-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142616233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-19 | DOI: 10.1016/j.cmpb.2024.108462
Arnab Maity, Goutam Saha
Background and objective:
Phonocardiogram (PCG) signal analysis is a non-invasive and cost-efficient approach for diagnosing cardiovascular diseases. Existing PCG-based approaches employ signal processing and machine learning (ML) for automatic disease detection. However, machine learning techniques are known to underperform in cross-corpora arrangements. A drastic effect on disease detection performance is observed when training and testing sets come from different PCG databases with varying data acquisition settings. This study investigates the impact of data acquisition parameter variations in the PCG data across different databases and develops methods to achieve robustness against these variations.
Methods:
To alleviate the effect of dataset-induced variations, the proposed approach employs a combination of three strategies: domain-invariant preprocessing, transfer learning, and domain-balanced variable hop fragment selection (DBVHFS). The domain-invariant preprocessing normalizes the PCG to reduce stethoscope- and environment-induced variations. The transfer learning utilizes a model pre-trained on diverse audio data to reduce the impact of data variability by generalizing feature representations. DBVHFS facilitates unbiased fine-tuning of the pre-trained model by balancing the training fragments across all domains, ensuring equal distribution from each class.
Results:
The proposed method is evaluated on six independent PhysioNet/CinC Challenge 2016 PCG databases using leave-one-dataset-out cross-validation. Results indicate that our system outperforms the existing study with a relative improvement of 5.92% in unweighted average recall and 17.71% in sensitivity.
Conclusions:
The methods proposed in this study address variations in PCG data originating from different sources, potentially enhancing the implementation possibility of automated cardiac screening systems in real-life scenarios.
{"title":"Enhancing cross-domain robustness in phonocardiogram signal classification using domain-invariant preprocessing and transfer learning","authors":"Arnab Maity, Goutam Saha","doi":"10.1016/j.cmpb.2024.108462","DOIUrl":"10.1016/j.cmpb.2024.108462","url":null,"abstract":"<div><h3>Background and objective:</h3><div>Phonocardiogram (PCG) signal analysis is a non-invasive and cost-efficient approach for diagnosing cardiovascular diseases. Existing PCG-based approaches employ signal processing and machine learning (ML) for automatic disease detection. However, machine learning techniques are known to underperform in cross-corpora arrangements. A drastic effect on disease detection performance is observed when training and testing sets come from different PCG databases with varying data acquisition settings. This study investigates the impact of data acquisition parameter variations in the PCG data across different databases and develops methods to achieve robustness against these variations.</div></div><div><h3>Methods:</h3><div>To alleviate the effect of dataset-induced variations, it employs a combination of three strategies: domain-invariant preprocessing, transfer learning, and domain-balanced variable hop fragment selection (DBVHFS). The domain-invariant preprocessing normalizes the PCG to reduce the stethoscope and environment-induced variations. The transfer learning utilizes a pre-trained model trained on diverse audio data to reduce the impact of data variability by generalizing feature representations. DBVHFS facilitates unbiased fine-tuning of the pre-trained model by balancing the training fragments across all domains, ensuring equal distribution from each class.</div></div><div><h3>Results:</h3><div>The proposed method is evaluated on six independent PhysioNet/CinC Challenge <span><math><mrow><mn>2016</mn></mrow></math></span> PCG databases using leave-one-dataset-out cross-validation. Results indicate that our system outperforms the existing study with a relative improvement of <strong>5.92%</strong> in unweighted average recall and <strong>17.71%</strong> in sensitivity.</div></div><div><h3>Conclusions:</h3><div>The methods proposed in this study address variations in PCG data originating from different sources, potentially enhancing the implementation possibility of automated cardiac screening systems in real-life scenarios.</div></div>","PeriodicalId":10624,"journal":{"name":"Computer methods and programs in biomedicine","volume":"257 ","pages":"Article 108462"},"PeriodicalIF":4.9,"publicationDate":"2024-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142567781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}