Dongliang Zhang, Yuansheng Cao, Qi Ouyang, Yuhai Tu
Synchronization among a group of active agents is ubiquitous in nature. Although synchronization based on direct interactions between agents, as described by the Kuramoto model, is well understood, the other general mechanism, based on indirect interactions among agents sharing limited resources, is less well known. Here, we propose a minimal thermodynamically consistent model for the altruistic resource-sharing (ARS) mechanism, wherein resources are needed for an individual agent to advance but a more advanced agent has a lower competence to obtain resources. We show that while differential competence in the ARS mechanism provides a negative feedback leading to synchronization, it also breaks detailed balance and thus requires additional energy dissipation beyond the cost of driving individual agents. By solving the model analytically, our study reveals a general tradeoff relation between the total energy dissipation rate and the two key performance measures of the system: average speed and synchronization accuracy. For a fixed dissipation rate, there is a distinct speed-accuracy Pareto front traversed by the scarcity of resources: scarcer resources lead to slower speed but more accurate synchronization. Increasing energy dissipation eases this tradeoff by pushing the speed-accuracy Pareto front outward. Connections of our work to realistic biological systems, such as the KaiABC system in the cyanobacterial circadian clock, and to other theoretical results based on the thermodynamic uncertainty relation are also discussed.
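As a point of reference for the direct-interaction route that the abstract contrasts with, the Kuramoto model can be simulated in a few lines. This is an illustrative sketch only; the oscillator count, coupling strength, and frequency spread below are made-up values, not parameters from the paper.

```python
import numpy as np

def kuramoto_step(theta, omega, K, dt):
    """One Euler step of the Kuramoto model:
    dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    N = len(theta)
    coupling = (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    return theta + dt * (omega + coupling)

def order_parameter(theta):
    """Synchronization order parameter r in [0, 1]; r -> 1 as phases align."""
    return abs(np.exp(1j * theta).mean())

rng = np.random.default_rng(0)
N, K, dt = 50, 2.0, 0.01           # hypothetical size, coupling, and time step
theta = rng.uniform(0, 2 * np.pi, N)
omega = rng.normal(1.0, 0.1, N)    # heterogeneous natural frequencies

r0 = order_parameter(theta)        # low for random initial phases
for _ in range(5000):
    theta = kuramoto_step(theta, omega, K, dt)
r1 = order_parameter(theta)        # strong coupling drives r toward 1
```

Here synchronization accuracy is read off from the order parameter r, the analogue of the accuracy measure the ARS model trades off against speed.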
Title: An altruistic resource-sharing mechanism for synchronization: The energy-speed-accuracy tradeoff. ArXiv, 2025-02-04. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11838778/pdf/
Matt Y Cheung, Sophia Zorek, Tucker J Netherton, Laurence E Court, Sadeer Al-Kindi, Ashok Veeraraghavan, Guha Balakrishnan
Diffusion models demonstrate state-of-the-art performance on image generation, and are gaining traction for sparse medical image reconstruction tasks. However, compared to classical reconstruction algorithms relying on simple analytical priors, diffusion models have the dangerous property of producing realistic-looking results even when incorrect, particularly with few observations. We investigate the utility of diffusion models as priors for image reconstruction by varying the number of observations and comparing their performance to classical priors (sparse and Tikhonov regularization) using pixel-based, structural, and downstream metrics. We make comparisons on low-dose chest wall computed tomography (CT) for fat mass quantification. First, we find that classical priors are superior to diffusion priors when the number of projections is "sufficient". Second, we find that diffusion priors can capture a large amount of detail with very few observations, significantly outperforming classical priors. However, they fall short of capturing all details, even with many observations. Finally, we find that the performance of diffusion priors plateaus after extremely few (≈10-15) projections. Ultimately, our work highlights potential issues with diffusion-based sparse reconstruction and underscores the importance of further investigation, particularly in high-stakes clinical settings.
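The two classical baselines named here, Tikhonov and sparsity regularization, can be sketched on a toy underdetermined linear inverse problem. The forward matrix, problem sizes, and regularization weights below are hypothetical stand-ins, not the CT operator or settings from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 100, 40                       # 100 unknowns, 40 observations (underdetermined)
A = rng.normal(size=(m, n))          # hypothetical forward operator, not a CT projector
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = 1.0   # sparse ground truth
y = A @ x_true + 0.01 * rng.normal(size=m)

# Tikhonov prior: argmin ||Ax - y||^2 + lam ||x||^2 has a closed form
lam_tik = 0.1
x_tik = np.linalg.solve(A.T @ A + lam_tik * np.eye(n), A.T @ y)

# Sparse prior: argmin 0.5 ||Ax - y||^2 + lam ||x||_1 via ISTA
lam_l1 = 0.5
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
x_l1 = np.zeros(n)
for _ in range(1000):
    g = x_l1 - step * A.T @ (A @ x_l1 - y)                           # gradient step
    x_l1 = np.sign(g) * np.maximum(np.abs(g) - step * lam_l1, 0.0)   # soft threshold

err_tik = np.linalg.norm(x_tik - x_true)
err_l1 = np.linalg.norm(x_l1 - x_true)   # sparse prior matches the sparse truth better
```

On a genuinely sparse signal the ℓ1 prior wins, which is the regime where the paper asks whether a learned diffusion prior can do better still.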
Title: When are Diffusion Priors Helpful in Sparse Reconstruction? A Study with Sparse-view CT. ArXiv, 2025-02-04. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11838784/pdf/
Objective: Alzheimer's disease (AD) is the most prevalent form of dementia worldwide, encompassing a prodromal stage known as Mild Cognitive Impairment (MCI), during which patients may either progress to AD or remain stable. The objective of this work was to capture structural and functional brain modulations from multimodal MRI data and Single Nucleotide Polymorphisms, even in the case of missing views, with the twofold goal of classifying AD patients versus healthy controls and detecting MCI converters. Approach: We propose a multimodal deep learning (DL) classification framework in which a generative module employing Cycle Generative Adversarial Networks is introduced in the latent space for imputing missing data (a common issue of multimodal approaches). An explainable AI method was then used to extract input feature relevance, allowing for post-hoc validation and enhancing the interpretability of the learned representations. Main results: Experimental results on two tasks, AD detection and MCI conversion, showed that our framework reached competitive state-of-the-art performance, with accuracies of 0.926 ± 0.02 and 0.711 ± 0.01 in the two tasks, respectively. The interpretability analysis revealed gray-matter modulations in cortical and subcortical brain areas typically associated with AD. Moreover, impairments in sensory-motor and visual resting-state networks along the disease continuum, as well as genetic mutations defining biological processes linked to endocytosis, amyloid-beta, and cholesterol, were identified. Significance: Our integrative and interpretable DL approach shows promising performance for AD detection and MCI prediction while shedding light on important biological insights.
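The cycle-consistency idea behind the generative imputation module can be illustrated with toy linear "generators". This is a drastically simplified sketch under stated assumptions: linear maps stand in for the paper's Cycle GANs, the dimensions are made up, and training is skipped by assuming the forward generator was recovered exactly.

```python
import numpy as np

rng = np.random.default_rng(2)
d_a, d_b = 6, 8                  # toy dimensions for two "modalities"
M = rng.normal(size=(d_b, d_a))  # hidden ground-truth mapping A -> B

# Stand-ins for trained generators; the paper learns these as Cycle GANs
# operating in a latent space, not as known linear maps.
G_ab = M.copy()                  # generator A -> B (assumed perfectly trained)
G_ba = np.linalg.pinv(M)         # generator B -> A

x_a = rng.normal(size=d_a)       # an available view
x_b_imputed = G_ab @ x_a         # impute the missing view from the available one

# Cycle-consistency penalty ||G_ba(G_ab(x)) - x|| that Cycle GAN training drives to zero
cycle_err = np.linalg.norm(G_ba @ (G_ab @ x_a) - x_a)
```

In the ideal case sketched here, mapping to the other modality and back reproduces the input, which is exactly the constraint the cycle-consistency loss enforces during training.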
Title: An interpretable generative multimodal neuroimaging-genomics framework for decoding Alzheimer's disease. ArXiv, 2025-02-04. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11213156/pdf/
B A Richards, N Ristoff, J Smits, A Jeronimo Perez, I Fescenko, M D Aiello, F Hubert, Y Silani, N Mosavian, M Saleh Ziabari, A Berzins, J T Damron, P Kehayias, D Egbebunmi, J E Shield, D L Huber, A M Mounce, M P Lilly, T Karaulanov, A Jarmola, A Laraoui, V M Acosta
Superparamagnetic iron-oxide nanoparticles (SPIONs) are promising probes for biomedical imaging, but the heterogeneity of their magnetic properties is difficult to characterize with existing methods. Here, we perform widefield imaging of the stray magnetic fields produced by hundreds of isolated ~30-nm SPIONs using a magnetic microscope based on nitrogen-vacancy centers in diamond. By analyzing the SPION magnetic field patterns as a function of applied magnetic field, we observe substantial field-dependent transverse magnetization components that are typically obscured by ensemble characterization methods. We find negligible hysteresis in each of the three magnetization components for nearly all SPIONs in our sample. Most SPIONs exhibit a sharp Langevin saturation curve, parameterized by a characteristic polarizing applied field, B_c. The B_c distribution is highly asymmetric, with a standard deviation (1.4 mT) that is larger than the median (0.6 mT). Using time-resolved magnetic microscopy, we directly record SPION Néel relaxation after switching off a 31 mT applied field, with a temporal resolution of ~60 ms limited by the ring-down time of the electromagnet coils. For small bias fields B_hold = 1.5-3.5 mT, we observe a broad range of SPION Néel relaxation times, from milliseconds to seconds, consistent with an exponential dependence on B_hold. Our time-resolved diamond magnetic microscopy study reveals rich SPION sample heterogeneity and may be extended to other fundamental studies of nanomagnetism.
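The Langevin saturation curve and the exponential field dependence of the Néel relaxation time described above can be written out directly. The saturation magnetization M_s, prefactor tau0, and field scale B0 below are illustrative assumptions, not values fitted in the paper; only B_c = 0.6 mT and the B_hold range come from the abstract.

```python
import numpy as np

def langevin(B, B_c, M_s=1.0):
    """Normalized Langevin magnetization curve M(B) = M_s [coth(B/B_c) - B_c/B]."""
    x = np.asarray(B, dtype=float) / B_c
    return M_s * (1.0 / np.tanh(x) - 1.0 / x)

def neel_tau(B_hold, tau0=1e-3, B0=0.5):
    """Illustrative exponential field dependence tau = tau0 * exp(B_hold / B0);
    tau0 (seconds) and B0 (mT) are hypothetical, not fits from the paper."""
    return tau0 * np.exp(np.asarray(B_hold, dtype=float) / B0)

B = np.linspace(0.1, 31.0, 200)      # field sweep in mT, illustrative
M_med = langevin(B, B_c=0.6)         # median B_c reported in the abstract: 0.6 mT
sat = M_med[-1]                      # nearly saturated well above B_c

taus = neel_tau(np.array([1.5, 3.5]))  # B_hold endpoints from the abstract, in mT
```

With these toy parameters the relaxation time already spans tens of milliseconds to about a second across the quoted B_hold range, consistent with the broad spread the measurement resolves particle by particle.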
Title: Time-resolved diamond magnetic microscopy of superparamagnetic iron-oxide nanoparticles. ArXiv, 2025-02-03. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11601802/pdf/
Mojtaba Safari, Zach Eidex, Chih-Wei Chang, Richard L J Qiu, Xiaofeng Yang
Magnetic resonance imaging (MRI) is a non-invasive imaging modality that provides comprehensive anatomical and functional insights into the human body. However, its long acquisition times can cause patient discomfort and motion artifacts, and limit real-time applications. To address these challenges, strategies such as parallel imaging, which uses multiple receiver coils to speed up data acquisition, have been applied. Additionally, compressed sensing (CS) facilitates image reconstruction from sparse data, significantly reducing acquisition time by minimizing the amount of data that must be collected. Recently, deep learning (DL) has emerged as a powerful tool for improving MRI reconstruction and has been integrated with parallel imaging and CS principles to achieve faster and more accurate reconstructions. This review comprehensively examines DL-based techniques for MRI reconstruction. We categorize and discuss various DL-based methods, including end-to-end approaches, unrolled optimization, and federated learning, highlighting their potential benefits. Our systematic review highlights significant contributions and underscores the potential of DL in MRI reconstruction. Additionally, we summarize key results and trends in DL-based MRI reconstruction, including quantitative metrics, datasets, acceleration factors, and the progress of and research interest in DL techniques over time. Finally, we discuss potential future directions and the importance of DL-based MRI reconstruction in advancing medical imaging. To facilitate further research in this area, we provide a GitHub repository that includes up-to-date DL-based MRI reconstruction publications and public datasets: https://github.com/mosaf/Awesome-DL-based-CS-MRI.
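The CS principle the review builds on, reconstructing from undersampled k-space by alternating data consistency with a sparsity constraint, can be sketched in 1D. The sampling pattern, sparsity level, and iteration count below are toy choices for illustration, not settings from any reviewed method.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k_sparse = 128, 4
x = np.zeros(n)
x[rng.choice(n, k_sparse, replace=False)] = rng.uniform(5.0, 10.0, k_sparse)

mask = np.zeros(n, dtype=bool)
mask[rng.choice(n, n // 2, replace=False)] = True   # keep half of "k-space"
y = np.fft.fft(x)[mask]                             # undersampled measurements

kz = np.zeros(n, dtype=complex)
kz[mask] = y
x_zf = np.fft.ifft(kz).real                         # naive zero-filled reconstruction

# Alternate exact data consistency with a hard sparsity projection
x_hat = np.zeros(n)
for _ in range(300):
    k_space = np.fft.fft(x_hat)
    k_space[mask] = y                               # re-impose measured samples
    x_hat = np.fft.ifft(k_space).real
    x_hat[np.argsort(np.abs(x_hat))[:-k_sparse]] = 0.0   # keep the 4 largest entries

err_zf = np.linalg.norm(x_zf - x) / np.linalg.norm(x)
err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
```

The unrolled-optimization networks surveyed in the review replace the hand-crafted sparsity projection in this loop with learned modules, which is what ties DL to the CS formulation.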
Title: Advancing MRI Reconstruction: A Systematic Review of Deep Learning and Compressed Sensing Integration. ArXiv, 2025-02-01. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11838702/pdf/
Yu Xi Huang, Simon Mahler, Maya Dickson, Aidin Abedi, Yu Tung Lo, Patrick D Lyden, Jonathan Russin, Charles Liu, Changhuei Yang
Cerebral blood flow is a critical metric for cerebrovascular monitoring, with applications in stroke detection, brain injury evaluation, aging, and neurological disorders. Non-invasively measuring cerebral blood dynamics is challenging because of the scalp and skull, which obstruct direct brain access and contain their own blood dynamics that must be isolated. We developed an aggregated seven-channel speckle contrast optical spectroscopy system to measure blood flow and blood volume non-invasively. Each channel, with a distinct source-to-detector distance, targeted a different depth to detect scalp and brain blood dynamics separately. By briefly occluding the superficial temporal artery, which supplies blood only to the scalp, we isolated surface blood dynamics from brain signals. Results on 20 subjects show that scalp-sensitive channels experienced significant reductions in blood dynamics during occlusion, while brain-sensitive channels experienced minimal changes. This provides experimental evidence of brain-to-scalp sensitivity in optical measurements, highlighting the optimal configuration for preferentially probing brain signals non-invasively.
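The statistic underlying speckle contrast optical spectroscopy is simply the speckle contrast K = σ/μ of the intensity over a patch: faster-moving scatterers blur the speckle pattern within an exposure and lower K. The synthetic frames below (exponential intensity statistics for fully developed speckle, motion modeled as averaging independent speckle realizations) are a standard toy model, not the authors' processing pipeline.

```python
import numpy as np

def speckle_contrast(intensity):
    """Speckle contrast K = std(I) / mean(I) over a patch; lower K indicates
    more intra-exposure blurring, i.e. faster-moving scatterers (blood)."""
    I = np.asarray(intensity, dtype=float)
    return I.std() / I.mean()

rng = np.random.default_rng(4)
# Fully developed static speckle has exponential intensity statistics, so K -> 1
static = rng.exponential(1.0, size=(64, 64))
# Motion blurring modeled as averaging N independent speckle frames: K -> 1/sqrt(N)
moving = rng.exponential(1.0, size=(25, 64, 64)).mean(axis=0)

K_static = speckle_contrast(static)   # close to 1
K_moving = speckle_contrast(moving)   # close to 1/sqrt(25) = 0.2
```

Comparing K across channels with different source-to-detector distances is what lets the system separate shallow (scalp) from deep (brain) dynamics.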
Title: Assessing Sensitivity of Brain-to-Scalp Blood Flows in Laser Speckle Imaging by Occluding the Superficial Temporal Artery. ArXiv, 2025-01-31. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11838768/pdf/
During development and under normal physiological conditions, biological tissues are continuously subjected to substantial mechanical stresses. In response to large deformations, cells in a tissue must undergo multicellular rearrangements in order to maintain integrity and robustness. However, how these events are connected in time and space remains unknown. Here, using computational and theoretical modeling, we studied the mechanical plasticity of epithelial monolayers under large deformations. Our results demonstrate that the jamming-unjamming (solid-fluid) transition in tissues can vary significantly depending on the degree of deformation, implying that tissues are highly unconventional materials. Using analytical modeling, we elucidate the origins of this behavior. We also demonstrate how a tissue accommodates large deformations through a collective series of rearrangements, which behave similarly to avalanches in non-living materials. We find that these tissue avalanches are governed by stress redistribution and the spatial distribution of vulnerable spots. Finally, we propose a simple and experimentally accessible framework to predict avalanches and infer tissue mechanical stress based on static images.
Authors: Anh Q Nguyen, Junxiang Huang, Dapeng Bi
Title: Origin of yield stress and mechanical plasticity in model biological tissues. ArXiv, 2025-01-31. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11398538/pdf/
Keyur D Shah, Chih-Wei Chang, Sibo Tian, Pretesh Patel, Richard Qiu, Justin Roper, Jun Zhou, Zhen Tian, Xiaofeng Yang
Purpose: Stereotactic arrhythmia radioablation (STAR) has emerged as a promising non-invasive treatment for refractory ventricular tachycardia (VT), offering a novel alternative for patients who are poor candidates for catheter ablation. This systematic review and meta-analysis evaluates the safety, efficacy, and technical aspects of STAR across preclinical studies, case reports, case series, and clinical trials.
Methods and materials: A systematic review identified 80 studies published between 2015 and 2024, including 12 preclinical studies, 47 case reports, 15 case series, and 6 clinical trials. Data on patient demographics, treatment parameters, and clinical outcomes were extracted. Meta-analyses were performed for pooled mortality rates, VT burden reduction, and acute toxicities, with subgroup analyses exploring cardiomyopathy type, age, left ventricular ejection fraction (LVEF), and treatment modality.
Results: The pooled 6- and 12-month mortality rates were 16% (95% CI: 11-21%) and 32% (95% CI: 26-39%), respectively. VT burden reduction at 6 months was 75% (95% CI: 73-77%), with significant heterogeneity (I² = 98.8%). Grade 3+ acute toxicities were observed in 7% (95% CI: 4-11%), with pneumonitis being the most common. Subgroup analyses showed comparable outcomes between LINAC- and CyberKnife-based treatments, with minor differences based on patient characteristics and cardiomyopathy type.
Conclusions: STAR demonstrates significant potential in reducing VT burden and improving patient outcomes. While favorable acute safety profiles and efficacy support clinical adoption, variability in treatment protocols underscores the need for standardized practices. Future studies should aim to optimize patient selection, establish robust dosimetric standards, and evaluate long-term safety.
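Pooled rates with confidence intervals of the kind reported above are commonly obtained by inverse-variance weighting on the logit scale. A minimal fixed-effect sketch follows; the event counts are hypothetical, not data from the 80 reviewed studies, and the review's actual model (likely random-effects, given the heterogeneity) is not reproduced here.

```python
import numpy as np

def pool_proportions(events, totals):
    """Fixed-effect inverse-variance pooling of proportions on the logit
    scale; returns the pooled proportion and its 95% confidence interval."""
    events = np.asarray(events, dtype=float)
    totals = np.asarray(totals, dtype=float)
    p = events / totals
    logit = np.log(p / (1.0 - p))
    var = 1.0 / events + 1.0 / (totals - events)   # variance of logit(p)
    w = 1.0 / var                                  # inverse-variance weights
    pooled = (w * logit).sum() / w.sum()
    se = np.sqrt(1.0 / w.sum())
    inv_logit = lambda z: 1.0 / (1.0 + np.exp(-z))
    return inv_logit(pooled), (inv_logit(pooled - 1.96 * se),
                               inv_logit(pooled + 1.96 * se))

# Hypothetical 6-month mortality counts from three studies (illustration only)
p_hat, (ci_lo, ci_hi) = pool_proportions([3, 5, 2], [20, 30, 15])
```

Back-transforming from the logit scale keeps the pooled estimate and its interval inside (0, 1), which is why proportions are pooled this way rather than on the raw scale.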
Title: Evaluating the Efficacy and Safety of Stereotactic Arrhythmia Radioablation in Ventricular Tachycardia: A Comprehensive Systematic Review and Meta-Analysis. ArXiv, 2025-01-31. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11838787/pdf/
A Network-Driven Framework for Enhancing Gene-Disease Association Studies in Coronary Artery Disease
Gutama Ibrahim Mohammad, Johan L.M. Björkegren, Tom Michoel
Motivation: Over the last decade, genome-wide association studies (GWAS) have successfully identified numerous genetic variants associated with complex diseases. These associations have the potential to reveal the molecular mechanisms underlying complex diseases and lead to the identification of novel drug targets. Despite these advancements, the biological pathways and mechanisms linking genetic variants to complex diseases are still not fully understood. Most trait-associated variants reside in non-coding regions and are presumed to influence phenotypes through regulatory effects on gene expression. Yet, it is often unclear which genes they regulate and in which cell types this regulation occurs. Transcriptome-wide association studies (TWAS) aim to bridge this gap by detecting trait-associated tissue gene expression regulated by GWAS variants. However, traditional TWAS approaches frequently overlook the critical contributions of trans-regulatory effects and fail to integrate comprehensive regulatory networks. Here, we present a novel framework that leverages tissue-specific gene regulatory networks (GRNs) to integrate cis- and trans-genetic regulatory effects into the TWAS framework for complex diseases.
Results: We validate our approach on coronary artery disease (CAD) using data from the STARNET project, which provides multi-tissue gene expression and genetic data from around 600 living patients with cardiovascular disease. Preliminary results demonstrate the potential of our GRN-driven framework to uncover more genes and pathways that may underlie CAD. This framework extends traditional TWAS methodologies by utilizing tissue-specific regulatory insights and advancing the understanding of complex disease genetic architecture.
Availability: https://github.com/guutama/GRN-TWAS
ArXiv, 2025-01-31. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11838773/pdf/
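A GRN-informed TWAS of the kind described can be caricatured as a two-stage regression: first impute genetically regulated expression of a gene from its cis variants plus the cis variants of network-linked regulator genes (the trans component), then test the imputed expression against the trait. The toy sketch below uses simulated data and plain ridge regression; the actual GRN-TWAS model is more involved, and all names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated toy data: expression driven by cis variants plus a trans effect
# mediated by a GRN-linked regulator gene (all effect sizes hypothetical).
n, m_cis, m_trans = 500, 5, 5
G_cis = rng.integers(0, 3, (n, m_cis)).astype(float)      # target gene's cis genotypes
G_trans = rng.integers(0, 3, (n, m_trans)).astype(float)  # regulator's cis genotypes
regulator = G_trans @ rng.normal(0, 0.3, m_trans) + rng.normal(0, 1, n)
expr = G_cis @ rng.normal(0, 0.4, m_cis) + 0.5 * regulator + rng.normal(0, 1, n)
trait = 0.3 * expr + rng.normal(0, 1, n)

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression weights."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Stage 1: train an expression model on cis + GRN-linked trans genotypes
X = np.hstack([G_cis, G_trans])
w = ridge_fit(X, expr)
expr_hat = X @ w          # imputed (genetically regulated) expression

# Stage 2: associate imputed expression with the trait
r = np.corrcoef(expr_hat, trait)[0, 1]
z = r * np.sqrt(n - 3)    # approximate association z-score
print(f"association z ~ {z:.1f}")
```

Including `G_trans` in stage 1 is what distinguishes this from a cis-only TWAS: variance in expression routed through the regulator becomes part of the genetic predictor.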
Advancing bioinformatics with large language models: components, applications and perspectives
Jiajia Liu, Mengyuan Yang, Yankai Yu, Haixia Xu, Tiangang Wang, Kang Li, Xiaobo Zhou
Large language models (LLMs) are a class of deep-learning-based artificial intelligence models that achieve strong performance across a wide range of tasks, especially in natural language processing (NLP). They typically consist of artificial neural networks with numerous parameters, trained on large amounts of unlabeled data using self-supervised or semi-supervised learning. However, their potential for solving bioinformatics problems may even exceed their proficiency in modeling human language. In this review, we provide a comprehensive overview of the essential components of LLMs in bioinformatics, spanning genomics, transcriptomics, proteomics, drug discovery, and single-cell analysis. Key aspects covered include tokenization methods for diverse data types, the architecture of transformer models, the core attention mechanism, and the pre-training processes underlying these models. Additionally, we introduce currently available foundation models and highlight their downstream applications across various bioinformatics domains. Finally, drawing from our experience, we offer practical guidance for both LLM users and developers, emphasizing strategies to optimize their use and foster further innovation in the field.
ArXiv, 2025-01-31. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10802675/pdf/
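Among the components such reviews cover, the core attention mechanism is compact enough to write down. Below is a minimal NumPy sketch of scaled dot-product self-attention, the generic transformer operation (not any specific bioinformatics foundation model); the "tokens" here are random embeddings standing in for, say, embedded DNA k-mers.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise token similarities
    scores -= scores.max(axis=-1, keepdims=True)    # subtract row max for stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: rows sum to 1
    return weights @ V, weights

# Toy example: 4 tokens with 8-dimensional embeddings, used as Q, K, and V
rng = np.random.default_rng(1)
tokens = rng.normal(size=(4, 8))
out, w = scaled_dot_product_attention(tokens, tokens, tokens)  # self-attention
print(w.round(2))  # row i: how much token i attends to each token
```

Real models add learned Q/K/V projections, multiple heads, and masking, but this single function is the mechanism the tokenization and pre-training machinery is built around.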