Protein flexibility, measured by the B-factor or Debye-Waller factor, is essential for protein functions such as structural support, enzyme activity, cellular communication, and molecular transport. Theoretical analysis and prediction of protein flexibility are crucial for protein design, engineering, and drug discovery. In this work, we introduce the persistent sheaf Laplacian (PSL), an effective tool in topological data analysis, to model and analyze protein flexibility. By representing the local topology and geometry of protein atoms through the multiscale harmonic and non-harmonic spectra of PSLs, the proposed model effectively captures protein flexibility and provides accurate, robust predictions of protein B-factors. Our PSL model demonstrates an increase in accuracy of 32% compared to the classical Gaussian network model (GNM) in predicting B-factors for a dataset of 364 proteins. Additionally, we construct a blind machine learning prediction method utilizing global and local protein features. Extensive computations and comparisons validate the effectiveness of the proposed PSL model for B-factor predictions.
Persistent Sheaf Laplacian Analysis of Protein Flexibility. Nicole Hayes, Xiaoqi Wei, Hongsong Feng, Ekaterina Merkurjev, Guo-Wei Wei. ArXiv, 2025-03-30. PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11844605/pdf/
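The classical GNM baseline that the PSL model is compared against is simple enough to sketch directly: build a Kirchhoff (connectivity) matrix over the Cα atoms and read relative B-factors off the diagonal of its Moore-Penrose pseudo-inverse. A minimal illustration follows; the 7 Å cutoff is a conventional GNM choice, not a value taken from the paper.

```python
import numpy as np

def gnm_bfactors(coords, cutoff=7.0):
    """Predict relative B-factors with the classical Gaussian network model.

    coords: (N, 3) array of C-alpha coordinates in angstroms.
    Returns B-factors up to a global scale factor.
    """
    # Kirchhoff matrix: -1 for atom pairs within the cutoff, degrees on the diagonal
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    kirchhoff = -(d < cutoff).astype(float)
    np.fill_diagonal(kirchhoff, 0.0)
    np.fill_diagonal(kirchhoff, -kirchhoff.sum(axis=1))
    # B_i is proportional to the i-th diagonal entry of the pseudo-inverse
    return np.diag(np.linalg.pinv(kirchhoff))

# Toy chain of 20 pseudo-atoms standing in for C-alpha positions
rng = np.random.default_rng(0)
coords = np.cumsum(rng.normal(scale=2.0, size=(20, 3)), axis=0)
b = gnm_bfactors(coords)
```

In practice the predicted profile is Pearson-correlated against experimental B-factors, which is the accuracy metric behind the 32% improvement claim.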
Oufan Zhang, Zi Hao Liu, Julie D Forman-Kay, Teresa Head-Gordon
Although machine learning has transformed protein structure prediction of folded protein ground states with remarkable accuracy, intrinsically disordered proteins and regions (IDPs/IDRs) are defined by diverse and dynamical structural ensembles that are predicted with low confidence by algorithms such as AlphaFold. We present a new machine learning method, IDPForge (Intrinsically Disordered Protein, FOlded and disordered Region GEnerator), that exploits a transformer protein language diffusion model to create all-atom IDP ensembles and IDR disordered ensembles that maintain the folded domains. IDPForge does not require sequence-specific training, back transformations from coarse-grained representations, or ensemble reweighting: in general, the created IDP/IDR conformational ensembles show good agreement with solution experimental data, and options for biasing with experimental restraints are provided if desired. We envision that IDPForge with these diverse capabilities will facilitate integrative and structural studies for proteins that contain intrinsic disorder.
Deep Learning of Proteins with Local and Global Regions of Disorder. ArXiv, 2025-03-29. PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11875298/pdf/
Single-cell proteomics (SCP) is transforming our understanding of biological complexity by shifting from bulk proteomics, where signals are averaged over thousands of cells, to the proteome analysis of individual cells. This granular perspective reveals distinct cell states, population heterogeneity, and the underpinnings of disease pathogenesis that bulk approaches may obscure. However, SCP demands exceptional sensitivity, precise cell handling, and robust data processing to overcome the inherent challenges of analyzing picogram-level protein samples without amplification. Recent innovations in sample preparation, separations, data acquisition strategies, and specialized mass spectrometry instrumentation have substantially improved proteome coverage and throughput. Approaches that integrate complementary omics, streamline multi-step sample processing, and automate workflows through microfluidics and specialized platforms promise to further push SCP boundaries. Advances in computational methods, especially for data normalization and imputation, address the pervasive issue of missing values, enabling more reliable downstream biological interpretations. Despite these strides, higher throughput, reproducibility, and consensus best practices remain pressing needs in the field. This mini review summarizes the latest progress in SCP technology and software solutions, highlighting how closer integration of analytical, computational, and experimental strategies will facilitate deeper and broader coverage of single-cell proteomes.
Single-Cell Proteomics Using Mass Spectrometry. Amanda Momenzadeh, Jesse G Meyer. ArXiv, 2025-03-29. PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11875278/pdf/
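Missing values are pervasive in single-cell proteomics matrices, as the review notes. A common baseline, shown here as a hedged sketch rather than any specific method from the review, is per-protein median imputation across the cells in which that protein was quantified:

```python
import numpy as np

def impute_median(x):
    """Replace NaNs in a proteins x cells intensity matrix with each
    protein's median over the cells where it was quantified."""
    x = x.copy()
    med = np.nanmedian(x, axis=1, keepdims=True)   # per-protein median, ignoring NaNs
    idx = np.where(np.isnan(x))
    x[idx] = np.broadcast_to(med, x.shape)[idx]    # fill only the missing entries
    return x

# Tiny example: 2 proteins x 3 cells with two missing measurements
m = np.array([[1.0, np.nan, 3.0],
              [np.nan, 5.0, 5.0]])
filled = impute_median(m)
```

More sophisticated imputation (kNN, model-based) follows the same pattern: estimate each missing entry from the observed structure of the matrix, then verify that downstream conclusions are robust to the choice of method.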
Time-resolved CBCT imaging, which reconstructs a dynamic sequence of CBCTs reflecting intra-scan motion (one CBCT per x-ray projection, without phase sorting or binning), is highly desired for regular and irregular motion characterization, patient setup, and motion-adapted radiotherapy. Representing patient anatomy and associated motion fields as 3D Gaussians, we developed a Gaussian representation-based framework (PMF-STGR) for fast and accurate dynamic CBCT reconstruction. PMF-STGR comprises three major components: a dense set of 3D Gaussians to reconstruct a reference-frame CBCT for the dynamic sequence; another 3D Gaussian set to capture three-level, coarse-to-fine motion basis components (MBCs) to model the intra-scan motion; and a CNN-based motion encoder to solve projection-specific temporal coefficients for the MBCs. Scaled by the temporal coefficients, the learned MBCs combine into deformation vector fields that deform the reference CBCT into projection-specific, time-resolved CBCTs capturing the dynamic motion. Due to the strong representation power of 3D Gaussians, PMF-STGR can reconstruct dynamic CBCTs in a 'one-shot' training fashion from a standard 3D CBCT scan, without using any prior anatomical or motion model. We evaluated PMF-STGR using XCAT phantom simulations and real patient scans. Metrics including the image relative error, structural similarity index measure, tumor center-of-mass error, and landmark localization error were used to evaluate the accuracy of the solved dynamic CBCTs and motion. PMF-STGR shows clear advantages over a state-of-the-art, INR-based approach, PMF-STINR. Compared with PMF-STINR, PMF-STGR reduces reconstruction time by 50% while reconstructing less blurred images with better motion accuracy. With improved efficiency and accuracy, PMF-STGR enhances the applicability of dynamic CBCT imaging for potential clinical translation.
Time-resolved dynamic CBCT reconstruction using prior-model-free spatiotemporal Gaussian representation (PMF-STGR). Jiacheng Xie, Hua-Chieh Shao, You Zhang. ArXiv, 2025-03-28. PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11975309/pdf/
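The core motion-modeling step described above, temporal coefficients scaling motion basis components into a per-projection deformation vector field, is a linear combination and can be sketched with placeholder shapes. All names and sizes here are illustrative, not taken from the paper:

```python
import numpy as np

# Hypothetical shapes: K motion basis components over an (X, Y, Z) voxel grid,
# each storing a 3-vector per voxel; c holds the temporal coefficients the
# motion encoder would solve for one projection's time point.
K, X, Y, Z = 3, 8, 8, 8
rng = np.random.default_rng(1)
mbcs = rng.normal(size=(K, X, Y, Z, 3))   # learned motion basis components
c = np.array([0.5, -0.2, 1.0])            # projection-specific coefficients

# Deformation vector field for this time point: coefficient-weighted sum of MBCs
dvf = np.tensordot(c, mbcs, axes=1)        # shape (X, Y, Z, 3)
```

The resulting `dvf` would then warp the reference-frame CBCT into the time-resolved CBCT for that projection; the warping itself (trilinear resampling of the reference volume) is omitted here.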
Geeling Chau, Christopher Wang, Sabera Talukder, Vighnesh Subramaniam, Saraswati Soedarmadji, Yisong Yue, Boris Katz, Andrei Barbu
We present a self-supervised framework that learns population-level codes for arbitrary ensembles of neural recordings at scale. We address key challenges in scaling models with neural time-series data, namely, sparse and variable electrode distribution across subjects and datasets. The Population Transformer (PopT) stacks on top of pretrained temporal embeddings and enhances downstream decoding by enabling learned aggregation of multiple spatially-sparse data channels. The pretrained PopT lowers the amount of data required for downstream decoding experiments, while increasing accuracy, even on held-out subjects and tasks. Compared to end-to-end methods, this approach is computationally lightweight, while achieving similar or better decoding performance. We further show how our framework is generalizable to multiple time-series embeddings and neural data modalities. Beyond decoding, we interpret the pretrained and fine-tuned PopT models to show how they can be used to extract neuroscience insights from large amounts of data. We release our code as well as a pretrained PopT to enable off-the-shelf improvements in multi-channel intracranial data decoding and interpretability. Code is available at https://github.com/czlwang/PopulationTransformer.
Population Transformer: Learning Population-level Representations of Neural Activity. ArXiv, 2025-03-28. PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11177958/pdf/
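The key ingredient above, learned aggregation over a variable and spatially sparse set of channels, can be illustrated as masked softmax pooling. This is a deliberately simplified stand-in for the transformer aggregation PopT actually learns; every shape and the scoring vector `w` are hypothetical:

```python
import numpy as np

def aggregate_channels(emb, mask, w):
    """Pool a variable set of channel embeddings into one vector.

    emb:  (channels, dim) per-channel temporal embeddings
    mask: (channels,) True where a channel was actually recorded
    w:    (dim,) scoring vector (a placeholder for learned parameters)
    """
    scores = emb @ w
    scores = np.where(mask, scores, -np.inf)        # absent channels get no weight
    weights = np.exp(scores - scores[mask].max())   # stable softmax over present channels
    weights = np.where(mask, weights, 0.0)
    weights = weights / weights.sum()
    return weights @ emb

rng = np.random.default_rng(2)
emb = rng.normal(size=(5, 4))
mask = np.array([True, True, False, True, False])
pooled = aggregate_channels(emb, mask, rng.normal(size=4))
```

The mask is what lets one model serve subjects with different electrode counts and placements, which is the sparsity challenge the abstract highlights.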
Haresh Rengaraj Rajamohan, Richard Kijowski, Kyunghyun Cho, Cem M Deniz
We developed deep learning models for predicting Total Knee Replacement (TKR) need within various time horizons in knee osteoarthritis patients, with a novel capability: the models can perform TKR prediction using a single scan, and, when a previous scan is available, they leverage a progressive risk formulation to improve their predictions. Unlike conventional approaches that treat each scan of a patient independently, our method incorporates a constraint based on the disease's progressive nature, ensuring that predicted TKR risk either increases or remains stable over time when multiple scans of a knee are available. This was achieved by enforcing a progressive risk formulation constraint during training with patients who have more than one available scan in the studies. Knee radiographs and MRIs from the Osteoarthritis Initiative (OAI) and Multicenter Osteoarthritis Study (MOST) were used in this work, and deep learning models were trained to predict TKR within 1-, 2-, and 4-year time periods. The proposed approach, utilizing a dual-model risk constraint architecture, demonstrated superior performance compared to the baseline: conventional models trained with standard binary cross-entropy loss. It achieved an AUROC of 0.87 and AUPRC of 0.47 for 1-year TKR prediction on the OAI radiograph test set, considerably improving over the baseline AUROC of 0.79 and AUPRC of 0.34. For the MOST radiograph test set, the proposed approach achieved an AUROC of 0.77 and AUPRC of 0.25 for 1-year predictions, outperforming the baseline AUROC of 0.71 and AUPRC of 0.19. Similar trends were observed in the MRI test sets.
A Progressive Risk Formulation for Enhanced Deep Learning based Total Knee Replacement Prediction in Knee Osteoarthritis. ArXiv, 2025-03-28. PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11975308/pdf/
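The progressive-risk constraint described above can be written as an extra penalty that fires whenever the predicted risk decreases between consecutive scans of the same knee. The loss below is a hedged sketch of that idea; the paper's exact formulation and weighting may differ:

```python
import numpy as np

def progressive_risk_loss(r_prev, r_curr, y, weight=1.0, eps=1e-7):
    """Binary cross-entropy on the current scan plus a hinge penalty that is
    nonzero only when predicted TKR risk drops from the previous scan.

    r_prev, r_curr: predicted risks for consecutive scans of the same knees
    y: TKR outcome labels for the current-scan horizon (0 or 1)
    """
    r_curr = np.clip(r_curr, eps, 1 - eps)
    bce = -(y * np.log(r_curr) + (1 - y) * np.log(1 - r_curr)).mean()
    monotonic = np.maximum(0.0, r_prev - r_curr).mean()  # penalize risk decreases
    return bce + weight * monotonic
```

With `r_prev <= r_curr` the penalty vanishes and the loss reduces to plain cross-entropy, which matches the abstract's requirement that risk either increases or stays stable over time.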
We have previously introduced Spectral Diffusion Posterior Sampling (Spectral DPS) as a framework for accurate one-step material decomposition that integrates analytic spectral system models with priors learned from large datasets. This work extends the 2D Spectral DPS algorithm to 3D by addressing potentially limiting large-memory requirements with a pre-trained 2D diffusion model for slice-by-slice processing and a compressed polychromatic forward model to ensure accurate physical modeling. Simulation studies demonstrate that the proposed memory-efficient 3D Spectral DPS enables material decomposition of clinically significant volume sizes. Quantitative analysis reveals that Spectral DPS outperforms other deep-learning algorithms, such as InceptNet and conditional DDPM, in contrast quantification, inter-slice continuity, and resolution preservation. This study establishes a foundation for advancing one-step material decomposition in volumetric spectral CT.
Volumetric Material Decomposition Using Spectral Diffusion Posterior Sampling with a Compressed Polychromatic Forward Model. Xiao Jiang, Grace J Gang, J Webster Stayman. ArXiv, 2025-03-28. PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11975312/pdf/
Yuhao Yan, R Adam Bayliss, Florian Wiesinger, Jose de Arcos Rodriguez, Adam R Burr, Andrew M Baschnagel, Brett A Morris, Carri K Glide-Hurst
Purpose: To evaluate a Deep-Learning-enhanced MUlti-PArametric MR sequence (DL-MUPA) for treatment response assessment in brain metastases patients undergoing stereotactic radiosurgery (SRS) and head-and-neck (HnN) cancer patients undergoing conventionally fractionated adaptive radiation therapy.
Methods: DL-MUPA derives quantitative T1 and T2 maps from a single 4-6-minute scan, denoised via a deep learning method, using dictionary fitting. Phantom benchmarking was performed on a NIST-ISMRM phantom. Longitudinal patient data were acquired on a 1.5T MR-simulator, including pre-treatment (PreTx) and every 3 months after SRS (PostTx) in brain, and PreTx, mid-treatment, and 3 months PostTx in HnN. Changes in mean T1 and T2 values were calculated within gross tumor volumes (GTVs), residual disease (RD, HnN), parotids, and submandibular glands (HnN) for treatment response assessment. Uninvolved normal tissues (normal-appearing white matter in brain, masseter in HnN) were evaluated as controls.
Results: Phantom benchmarking showed excellent inter-session repeatability (coefficient of variation <1% for T1, <7% for T2). Uninvolved normal tissue suggested acceptable in-vivo repeatability (brain |Δ|<5%; HnN |ΔT1|<7%, |ΔT2|<18% (4 ms)). Remarkable changes were noted in resolved brain metastasis (ΔT1=14%) and necrotic settings (ΔT1=18-40%, ΔT2=9-41%). In HnN, two primary tumors showed a T2 increase (PostTx GTV ΔT2>13%, RD ΔT2>18%). A nodal disease resolved PostTx (GTV ΔT1=-40%, ΔT2=-33%; RD ΔT1=-29%, ΔT2=-35%). Enhancement was found in involved parotids (PostTx ΔT1>12%, ΔT2>13%) and submandibular glands (PostTx ΔT1>15%, ΔT2>35%), while the uninvolved organs remained stable.
Conclusions: DL-MUPA shows promise for treatment response assessment and identifying potential endpoints for functional sparing.
Evaluation of a Novel Quantitative Multiparametric MR Sequence for Radiation Therapy Treatment Response Assessment. ArXiv, 2025-03-28. PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11975303/pdf/
Yuganthi R Liyanage, Gerardo Chowell, Gleb Pogudin, Necibe Tuncer
Phenomenological models are highly effective tools for forecasting disease dynamics using real-world data, particularly in scenarios where detailed knowledge of disease mechanisms is limited. However, their reliability depends on the model parameters' structural and practical identifiability. In this study, we systematically analyze the identifiability of six commonly used growth models in epidemiology: the generalized growth model, the generalized logistic model, the Richards model, the generalized Richards model, the Gompertz model, and a modified SEIR model with inhomogeneous mixing. To address challenges posed by non-integer power exponents in these models, we reformulate them by introducing additional state variables. This enables rigorous structural identifiability analysis using the StructuralIdentifiability.jl package in Julia. We validate the structural identifiability results by performing parameter estimation and forecasting using the GrowthPredict MATLAB toolbox, which is designed to fit and forecast time-series trajectories based on phenomenological growth models. We applied it to three epidemiological datasets: weekly incidence data for monkeypox, COVID-19, and Ebola. Additionally, we assess practical identifiability through Monte Carlo simulations to evaluate parameter estimation robustness under varying levels of observational noise. Our results confirm that all six models are structurally identifiable under the proposed reformulation. Furthermore, practical identifiability analyses demonstrate that parameter estimates remain robust across different noise levels, though sensitivity varies by model and dataset. These findings provide critical insights into the strengths and limitations of phenomenological models for characterizing epidemic trajectories, emphasizing their adaptability to real-world challenges and their role in informing public health interventions.
{"title":"Structural and Practical Identifiability of Phenomenological Growth Models for Epidemic Forecasting.","authors":"Yuganthi R Liyanage, Gerardo Chowell, Gleb Pogudin, Necibe Tuncer","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Phenomenological models are highly effective tools for forecasting disease dynamics using real-world data, particularly in scenarios where detailed knowledge of disease mechanisms is limited. However, their reliability depends on the model parameters' structural and practical identifiability. In this study, we systematically analyze the identifiability of six commonly used growth models in epidemiology: the generalized growth model, the generalized logistic model, the Richards model, the generalized Richards model, the Gompertz model, and a modified SEIR model with inhomogeneous mixing. To address challenges posed by non-integer power exponents in these models, we reformulate them by introducing additional state variables. This enables rigorous structural identifiability analysis using the StructuralIdentifiability.jl package in Julia. We validate the structural identifiability results by performing parameter estimation and forecasting using the GrowthPredict MATLAB toolbox. This toolbox is designed to fit and forecast time series trajectories based on phenomenological growth models. We applied it to three epidemiological datasets: weekly incidence data for monkeypox, COVID-19, and Ebola. Additionally, we assess practical identifiability through Monte Carlo simulations to evaluate parameter estimation robustness under varying levels of observational noise. Our results confirm that all six models are structurally identifiable under the proposed reformulation. Furthermore, practical identifiability analyses demonstrate that parameter estimates remain robust across different noise levels, though sensitivity varies by model and dataset. These findings provide critical insights into the strengths and limitations of phenomenological models in characterizing epidemic trajectories, emphasizing their adaptability to real-world challenges and their role in informing public health interventions.</p>","PeriodicalId":93888,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11957228/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143756438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
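The abstract above describes reformulating growth models with non-integer power exponents by introducing additional state variables. A minimal sketch of that idea, using Python rather than the paper's Julia/MATLAB tooling: for the generalized growth model dC/dt = r·C^p (0 < p < 1), the substitution y = C^(1−p) is a standard device (assumed here, not taken from the paper) that yields the linear ODE dy/dt = (1−p)·r, and hence a closed form for C(t). The function names and parameter values below are hypothetical.

```python
def ggm_closed_form(t, C0, r, p):
    """Generalized growth model dC/dt = r * C**p (0 < p < 1).

    The auxiliary state y = C**(1 - p) satisfies the linear ODE
    dy/dt = (1 - p) * r, so y(t) is a straight line; transforming
    back gives C(t) in closed form with no non-integer exponent
    left in the dynamics.
    """
    y0 = C0 ** (1.0 - p)
    y = y0 + (1.0 - p) * r * t
    return y ** (1.0 / (1.0 - p))


def ggm_euler(t_end, C0, r, p, n_steps=100_000):
    """Forward-Euler integration of the original ODE, for cross-checking."""
    dt = t_end / n_steps
    C = C0
    for _ in range(n_steps):
        C += dt * r * C ** p
    return C


# Sub-exponential growth: p = 0.6 sits between constant incidence
# (p = 0) and exponential growth (p = 1).
r, p, C0, T = 0.5, 0.6, 1.0, 10.0
print(ggm_closed_form(T, C0, r, p))  # ~15.59, agrees with ggm_euler(T, C0, r, p)
```

With these values y grows linearly from 1 to 3 over t ∈ [0, 10], so C(10) = 3^2.5 ≈ 15.59; the agreement with direct Euler integration confirms the substitution preserves the dynamics.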
Iris H R Yoon, Gregory Henselman-Petrusek, Yiyi Yu, Robert Ghrist, Spencer LaVere Smith, Chad Giusti
Neural manifolds summarize the intrinsic structure of the information encoded by a population of neurons. Advances in experimental techniques have made simultaneous recordings from multiple brain regions increasingly commonplace, raising the possibility of studying how these manifolds relate across populations. However, when the manifolds are nonlinear and possibly code for multiple unknown variables, it is challenging to extract robust and falsifiable information about their relationships. We introduce a framework, called the method of analogous cycles, for matching topological features of neural manifolds using only observed dissimilarity matrices within and between neural populations. We demonstrate via analysis of simulations and in vivo experimental data that this method can be used to correctly identify multiple shared circular coordinate systems across both stimuli and inferred neural manifolds. Conversely, the method rejects matching features that are not intrinsic to one of the systems. Further, as this method is deterministic and does not rely on dimensionality reduction or optimization methods, it is amenable to direct mathematical investigation and interpretation in terms of the underlying neural activity. We thus propose the method of analogous cycles as a suitable foundation for a theory of cross-population analysis via neural manifolds.
{"title":"Tracking the topology of neural manifolds across populations.","authors":"Iris H R Yoon, Gregory Henselman-Petrusek, Yiyi Yu, Robert Ghrist, Spencer LaVere Smith, Chad Giusti","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Neural manifolds summarize the intrinsic structure of the information encoded by a population of neurons. Advances in experimental techniques have made simultaneous recordings from multiple brain regions increasingly commonplace, raising the possibility of studying how these manifolds relate across populations. However, when the manifolds are nonlinear and possibly code for multiple unknown variables, it is challenging to extract robust and falsifiable information about their relationships. We introduce a framework, called the method of analogous cycles, for matching topological features of neural manifolds using only observed dissimilarity matrices within and between neural populations. We demonstrate via analysis of simulations and <i>in vivo</i> experimental data that this method can be used to correctly identify multiple shared circular coordinate systems across both stimuli and inferred neural manifolds. Conversely, the method rejects matching features that are not intrinsic to one of the systems. Further, as this method is deterministic and does not rely on dimensionality reduction or optimization methods, it is amenable to direct mathematical investigation and interpretation in terms of the underlying neural activity. 
We thus propose the method of analogous cycles as a suitable foundation for a theory of cross-population analysis via neural manifolds.</p>","PeriodicalId":93888,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11975052/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143804999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}