Nimrod Sherf, Si Tang, Dylan Hafner, Jonathan D Touboul, Xaq Pitkow, Kevin E Bassler, Krešimir Josić
Neural circuits exhibit structured connectivity, including an overrepresentation of reciprocal connections between neuron pairs. Despite important advances, a full understanding of how such partial symmetry in connectivity shapes neural dynamics remains elusive. Here we ask how correlations between reciprocal connections in a random, recurrent neural network affect phase-space complexity, defined as the exponential proliferation rate (with network size) of the number of fixed points that accompanies the transition to chaotic dynamics. We find a striking pattern: partial anti-symmetry strongly amplifies complexity, while partial symmetry suppresses it. These opposing trends closely track changes in other measures of dynamical behavior, such as dimensionality, Lyapunov exponents, and transient path length, supporting the view that fixed-point structure is a key determinant of network dynamics. Thus, positive reciprocal correlations favor low-dimensional, slowly varying activity, whereas negative correlations promote high-dimensional, rapidly fluctuating chaotic activity. These results yield testable predictions about the link between connection reciprocity, neural dynamics and function.
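The connectivity ensemble described above can be sketched numerically. Below is a minimal sketch, assuming Gaussian couplings with entry variance 1/N and a reciprocal correlation η between J_ij and J_ji; the tanh rate dynamics follow the standard random recurrent network model and are an illustrative assumption, not necessarily the paper's exact equations:

```python
import numpy as np

def partially_symmetric_J(n, eta, rng):
    """Gaussian coupling matrix with entry variance 1/n and
    Corr(J_ij, J_ji) = eta for i != j (eta in [-1, 1])."""
    A = rng.standard_normal((n, n))
    S = (A + A.T) / np.sqrt(2.0)   # symmetric part (unit-variance off-diagonal)
    T = (A - A.T) / np.sqrt(2.0)   # anti-symmetric part
    mix = np.sqrt((1.0 + eta) / 2.0) * S + np.sqrt((1.0 - eta) / 2.0) * T
    return mix / np.sqrt(n)

def simulate(J, g=1.5, T=50.0, dt=0.05, rng=None):
    """Euler-integrate the standard rate model dx/dt = -x + g * J @ tanh(x)."""
    rng = rng or np.random.default_rng()
    x = 0.1 * rng.standard_normal(J.shape[0])
    for _ in range(int(T / dt)):
        x = x + dt * (-x + g * J @ np.tanh(x))
    return x
```

Because Cov(J_ij, J_ji) = (1+η)/2 − (1−η)/2 = η while each entry keeps unit (scaled) variance, η = +1 gives a symmetric matrix and η = −1 an anti-symmetric one, interpolating between the two regimes contrasted in the abstract.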
{"title":"Complexity and dynamics of partially symmetric random neural networks.","authors":"Nimrod Sherf, Si Tang, Dylan Hafner, Jonathan D Touboul, Xaq Pitkow, Kevin E Bassler, Krešimir Josić","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Neural circuits exhibit structured connectivity, including an overrepresentation of reciprocal connections between neuron pairs. Despite important advances, a full understanding of how such partial symmetry in connectivity shapes neural dynamics remains elusive. Here we ask how correlations between reciprocal connections in a random, recurrent neural network affect phase-space complexity, defined as the exponential proliferation rate (with network size) of the number of fixed points that accompanies the transition to chaotic dynamics. We find a striking pattern: partial anti-symmetry strongly amplifies complexity, while partial symmetry suppresses it. These opposing trends closely track changes in other measures of dynamical behavior, such as dimensionality, Lyapunov exponents, and transient path length, supporting the view that fixed-point structure is a key determinant of network dynamics. Thus, positive reciprocal correlations favor low-dimensional, slowly varying activity, whereas negative correlations promote high-dimensional, rapidly fluctuating chaotic activity. 
These results yield testable predictions about the link between connection reciprocity, neural dynamics and function.</p>","PeriodicalId":93888,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12772704/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145918962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Objective: To develop a fast and accurate deterministic algorithm for calculation of dose and fluence spectra distributions for treatment planning in proton beam therapy. To evaluate algorithm performance for calculations in water for protons in the therapeutic energy range.
Approach: We solve the Boltzmann transport equation using an iterative procedure. Our algorithm accounts for Coulomb scattering and nuclear reactions. It uses the same physical models as the most rigorous Monte Carlo systems and thereby achieves the same low level of systematic errors. Because our solver does not involve random sampling, the solution is not contaminated by statistical noise, which means the overall uncertainties of our solver are lower than those realistically achievable with Monte Carlo. Furthermore, our solver is orders of magnitude faster. Another advantage is that it calculates fluence spectra, which are needed for calculating relative biological effectiveness, especially when advanced radiobiological models are used that may present a challenge for other algorithms.
Main results: We have developed a novel Boltzmann equation solver, written prototype software, and completed its testing for calculations in water. For 40-220 MeV protons we calculated fluence spectra, depth doses, and three-dimensional dose distributions for narrow Gaussian beams. The CPU time was 5-11 ms for depth doses and fluence spectra at multiple depths. Gaussian beam calculations took 31-78 ms. All calculations were run on a single Intel i7 2.9 GHz CPU. Comparison of our solver with Geant4 showed good agreement for all energies and depths. For the 1%/1 mm γ-test the pass rate was 0.95-0.99, where the 1% criterion refers to the difference between our dose and the Geant4 dose at the same point. The test included low-dose regions down to 1% of the maximum dose.
Significance: Results of the study provide a foundation for achieving a high computing speed with uncompromised accuracy in proton treatment planning systems.
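The 1%/1 mm test reported above is a γ-type composite criterion combining a dose-difference and a distance-to-agreement tolerance. A simplified 1-D sketch of such a pass-rate calculation (an illustrative reimplementation under assumed conventions, not the authors' code):

```python
import numpy as np

def gamma_pass_rate(dose_eval, dose_ref, dx_mm, dd_frac=0.01, dta_mm=1.0,
                    low_cut=0.01):
    """Simplified 1-D global gamma analysis (sketch only).

    dd_frac : dose-difference criterion, fraction of the max reference dose.
    dta_mm  : distance-to-agreement criterion in mm.
    low_cut : ignore points below this fraction of the max reference dose.
    """
    dose_eval = np.asarray(dose_eval, float)
    dose_ref = np.asarray(dose_ref, float)
    dmax = dose_ref.max()
    x = np.arange(len(dose_ref)) * dx_mm
    passed, total = 0, 0
    for i, d in enumerate(dose_eval):
        if dose_ref[i] < low_cut * dmax:
            continue          # below the low-dose cutoff
        total += 1
        # gamma^2 over all reference points; a point passes if min <= 1
        g2 = ((x - x[i]) / dta_mm) ** 2 + ((dose_ref - d) / (dd_frac * dmax)) ** 2
        if g2.min() <= 1.0:
            passed += 1
    return passed / total
```

With dd_frac=0.01 and low_cut=0.01 this mirrors the 1% dose criterion and the 1%-of-maximum low-dose cutoff quoted in the abstract; the authors' exact search strategy and interpolation may differ.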
{"title":"A novel Boltzmann equation solver for calculation of dose and fluence spectra distributions for proton beam therapy.","authors":"Oleg N Vassiliev, Radhe Mohan","doi":"","DOIUrl":"","url":null,"abstract":"<p><strong>Objective: </strong>To develop a fast and accurate deterministic algorithm for calculation of dose and fluence spectra distributions for treatment planning in proton beam therapy. To evaluate algorithm performance for calculations in water for protons in the therapeutic energy range.</p><p><strong>Approach: </strong>We solve the Boltzmann transport equation using an iterative procedure. Our algorithm accounts for Coulomb scattering and nuclear reactions. It uses the same physical models, as do the most rigorous Monte Carlo systems. Thereby it achieves the same low level of systematic errors. Our solver does not involve random sampling. The solution is not contaminated by statistical noise. This means that the overall uncertainties of our solver are lower than those realistically achievable with Monte Carlo. Furthermore, our solver is orders of magnitude faster. Its another advantage is that it calculates fluence spectra. They are needed for calculation of relative biological effectiveness, especially when advanced radiobiological models are used that may present a challenge for other algorithms.</p><p><strong>Main results: </strong>We have developed a novel Boltzmann equation solver, have written prototype software, and completed its testing for calculations in water. For 40-220 MeV protons we calculated fluence spectra, depth doses, three-dimensional dose distributions for narrow Gaussian beams. The CPU time was 5-11 ms for depth doses and fluence spectra at multiple depths. Gaussian beam calculations took 31-78 ms. All the calculations were run on a single Intel i7 2.9 GHz CPU. Comparison of our solver with Geant4 showed good agreement for all energies and depths. 
For the 1%/1 mm <math><mrow><mi>γ</mi></mrow> </math> -test the pass rate was 0.95-0.99. In this test, 1% was the difference between our and Geant4 doses at the same point. The test included low dose regions down to 1% of the maximum dose.</p><p><strong>Significance: </strong>Results of the study provide a foundation for achieving a high computing speed with uncompromised accuracy in proton treatment planning systems.</p>","PeriodicalId":93888,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12772697/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145918945","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Siqi Li, Benjamin A Spencer, Yiran Wang, Yasser G Abdelhafez, Heather Hunt, J Anthony Seibert, Simon R Cherry, Ramsey D Badawi, Lorenzo Nardo, Guobao Wang
Bone marrow (BM) metabolic quantification with 18F-fluorodeoxyglucose (FDG) positron emission tomography (PET) is of broad clinical significance for accurate assessment of BM at staging and follow-up, especially when immunotherapy is involved. However, current methods of quantifying BM may be inaccurate because the volume defined to measure bone marrow may also contain a fraction of trabecular bone, in which 18F-FDG activity is negligible, resulting in a potential underestimation of true BM uptake. In this study, we demonstrate this bone-led tissue composition effect and propose a bone fraction correction (BFC) method using X-ray dual-energy computed tomography (DECT) material decomposition. This study included ten scans from five cancer patients who underwent baseline and follow-up dynamic 18F-FDG PET and DECT scans using the uEXPLORER total-body PET/CT system. The voxel-wise bone volume fraction was estimated from DECT and then incorporated into the PET measurement formulas for BFC. The standardized uptake value (SUV), 18F-FDG delivery rate K1, and net influx rate Ki values in BM regions were estimated with and without BFC and compared using statistical analysis. The results first demonstrated the feasibility of performing voxel-wise material decomposition using DECT for metabolic BM imaging. With BFC, the SUV, K1, and Ki values significantly increased by an average of 13.28% in BM regions compared to those without BFC (all P<0.0001), indicating the impact of BFC on BM quantification. Parametric imaging with BFC further confirmed the regional analysis results. Our study using DECT suggests that current SUV and kinetic quantification of BM is likely underestimated in PET due to the presence of a significant bone volume fraction. Incorporating tissue composition information through BFC may improve BM metabolic quantification.
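The dilution effect described above can be illustrated with a simple partial-volume argument. Assuming FDG activity in trabecular bone is negligible and bone occupies a voxel fraction f_bone, a measured marrow value underestimates the true marrow value by the factor (1 − f_bone). The function below is an assumed illustrative form, not necessarily the paper's exact measurement formula:

```python
import numpy as np

def bone_fraction_correct(suv, f_bone, f_max=0.95):
    """Illustrative bone fraction correction (assumed form):
    if trabecular bone with negligible FDG uptake occupies a fraction
    f_bone of each voxel, the marrow-only value is measured / (1 - f_bone).
    f_bone is clipped to avoid dividing by ~0 in nearly pure-bone voxels."""
    f = np.clip(np.asarray(f_bone, dtype=float), 0.0, f_max)
    return np.asarray(suv, dtype=float) / (1.0 - f)
```

Under this assumed form, a bone volume fraction near 0.12 yields roughly a 13% increase, the same order as the average increase reported in the abstract.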
{"title":"Incorporating Tissue Composition Information in Total-Body PET Metabolic Quantification of Bone Marrow through Dual-Energy CT.","authors":"Siqi Li, Benjamin A Spencer, Yiran Wang, Yasser G Abdelhafez, Heather Hunt, J Anthony Seibert, Simon R Cherry, Ramsey D Badawi, Lorenzo Nardo, Guobao Wang","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Bone marrow (BM) metabolic quantification with 18F-fluorodeoxyglucose (FDG) positron emission tomography (PET) is of broad clinical significance for accurate assessment of BM at staging and follow-up, especially when immunotherapy is involved. However, current methods of quantifying BM may be inaccurate because the volume defined to measure bone marrow may also consist of a fraction of trabecular bone in which 18F-FDG activity is negligible, resulting in a potential underestimation of true BM uptake. In this study, we demonstrate this bone-led tissue composition effect and propose a bone fraction correction (BFC) method using X-ray dual-energy computed tomography (DECT) material decomposition. This study included ten scans from five cancer patients who underwent baseline and follow-up dynamic 18F-FDG PET and DECT scans using the uEXPLORER total-body PET/CT system. The voxel-wise bone volume fraction was estimated from DECT and then incorporated into the PET measurement formulas for BFC. The standardized uptake value (SUV), 18F-FDG delivery rate K1, and net influx rate Ki values in BM regions were estimated with and without BFC and compared using the statistical analysis. The results first demonstrated the feasibility of performing voxel-wise material decomposition using DECT for metabolic BM imaging. With BFC, the SUV, K1, and Ki values significantly increased by an average of 13.28% in BM regions compared to those without BFC (all P<0.0001), indicating the impact of BFC for BM quantification. Parametric imaging with BFC further confirmed regional analysis. 
Our study using DECT suggests current SUV and kinetic quantification of BM are likely underestimated in PET due to the presence of a significant bone volume fraction. Incorporating tissue composition information through BFC may improve BM metabolic quantification.</p>","PeriodicalId":93888,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-12-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12772706/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145919172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integrating non-Euclidean brain imaging data with Euclidean tabular data, such as clinical and demographic information, poses a substantial challenge for medical imaging analysis, particularly in forecasting future outcomes. While machine learning and deep learning techniques have been applied successfully to cross-sectional classification and prediction tasks, effectively forecasting outcomes in longitudinal imaging studies remains challenging. To address this challenge, we introduce a time-aware graph neural network model with transformer fusion (GNN-TF). This model flexibly integrates both tabular data and dynamic brain connectivity data, leveraging the temporal order of these variables within a coherent framework. By incorporating non-Euclidean and Euclidean sources of information from a longitudinal resting-state fMRI dataset from the National Consortium on Alcohol and Neurodevelopment in Adolescence (NCANDA), the GNN-TF enables a comprehensive analysis that captures critical aspects of longitudinal imaging data. Comparative analyses against a variety of established machine learning and deep learning models demonstrate that GNN-TF outperforms these state-of-the-art methods, delivering superior accuracy in predicting future tobacco use. The end-to-end, time-aware transformer fusion structure of the proposed GNN-TF model successfully integrates multiple data modalities and leverages temporal dynamics, making it a valuable analytic tool for functional brain imaging studies focused on clinical outcome prediction.
{"title":"Graph Neural Networks with Transformer Fusion of Brain Connectivity Dynamics and Tabular Data for Forecasting Future Tobacco Use.","authors":"Runzhi Zhou, Xi Luo","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Integrating non-Euclidean brain imaging data with Euclidean tabular data, such as clinical and demographic information, poses a substantial challenge for medical imaging analysis, particularly in forecasting future outcomes. While machine learning and deep learning techniques have been applied successfully to cross-sectional classification and prediction tasks, effectively forecasting outcomes in longitudinal imaging studies remains challenging. To address this challenge, we introduce a time-aware graph neural network model with transformer fusion (GNN-TF). This model flexibly integrates both tabular data and dynamic brain connectivity data, leveraging the temporal order of these variables within a coherent framework. By incorporating non-Euclidean and Euclidean sources of information from a longitudinal resting-state fMRI dataset from the National Consortium on Alcohol and Neurodevelopment in Adolescence (NCANDA), the GNN-TF enables a comprehensive analysis that captures critical aspects of longitudinal imaging data. Comparative analyses against a variety of established machine learning and deep learning models demonstrate that GNN-TF outperforms these state-of-the-art methods, delivering superior predictive accuracy for predicting future tobacco usage. 
The end-to-end, time-aware transformer fusion structure of the proposed GNN-TF model successfully integrates multiple data modalities and leverages temporal dynamics, making it a valuable analytic tool for functional brain imaging studies focused on clinical outcome prediction.</p>","PeriodicalId":93888,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-12-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12772699/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145919009","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Measurements of cell size dynamics have established the adder principle as a robust mechanism of cell size homeostasis. In this framework, cells add a nearly constant amount of size during each cell cycle, independent of their size at birth. Theoretical studies have shown that the adder principle can be achieved when cell-cycle progression is coupled to cell size. Here, we extend this framework by considering a general growth law modeled as a Hill-type function of cell size. This assumption introduces growth saturation to the model, such that very large cells grow approximately linearly rather than exponentially. Additionally, to capture the sequential nature of division, we implement a stochastic multi-step adder model in which cells progress through internal regulatory stages before dividing. From this model, we derive exact analytical expressions for the moments of cell size distributions. Our results show that stronger growth saturation increases the mean cell size in steady state, while slightly reducing fluctuations compared to exponential growth. Importantly, despite these changes, the adder property is preserved. This emphasizes that the reduction in size variability is a consequence of the growth law rather than simple scaling with mean size. Finally, we analyze stochastic clonal proliferation and find that growth saturation influences both single-cell size statistics and variability across populations. Our results provide a generalized framework for connecting multi-step adder mechanisms with proliferation dynamics, extending size control theory beyond exponential growth.
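A minimal simulation can illustrate the preserved adder property under saturating growth. This sketch assumes the growth law dv/dt = r·v/(1 + v/K) (a Hill-type saturation with coefficient 1, so large cells grow nearly linearly) and, for transparency, couples stage firing to the instantaneous growth rate so that a near-constant size is added per stage; the paper's exact coupling of cycle progression to cell size may differ:

```python
import numpy as np

def simulate_adder(n_cycles=500, M=10, k=5.0, r=1.0, K=4.0,
                   v0=1.0, dt=1e-3, seed=0):
    """Multi-step adder sketch (assumed model, not the paper's exact one).
    The cell advances through M stochastic division stages and divides in
    half after stage M.  Returns (birth sizes, added sizes) per cycle."""
    rng = np.random.default_rng(seed)
    v, births, added = v0, [], []
    for _ in range(n_cycles):
        vb, stage = v, 0
        while stage < M:
            dvdt = r * v / (1.0 + v / K)      # saturating (Hill-type) growth
            v += dt * dvdt
            if rng.random() < k * dvdt * dt:  # stage fires per unit size added
                stage += 1
        births.append(vb)
        added.append(v - vb)
        v /= 2.0                              # symmetric division
    return np.array(births), np.array(added)
```

With stage firings occurring at rate k per unit of added size, the total size added per cycle is approximately Gamma-distributed with mean M/k, independent of size at birth, so the regression of added size on birth size has slope near zero regardless of the growth law.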
{"title":"Stochastic multi-step cell size homeostasis model for cycling human cells.","authors":"Sayeh Rezaee, Cesar Nieto, Abhyudai Singh","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Measurements of cell size dynamics have established the adder principle as a robust mechanism of cell size homeostasis. In this framework, cells add a nearly constant amount of size during each cell cycle, independent of their size at birth. Theoretical studies have shown that the adder principle can be achieved when cell-cycle progression is coupled to cell size. Here, we extend this framework by considering a general growth law modeled as a Hill-type function of cell size. This assumption introduces growth saturation to the model, such that very large cells grow approximately linearly rather than exponentially. Additionally, to capture the sequential nature of division, we implement a stochastic multi-step adder model in which cells progress through internal regulatory stages before dividing. From this model, we derive exact analytical expressions for the moments of cell size distributions. Our results show that stronger growth saturation increases the mean cell size in steady state, while slightly reducing fluctuations compared to exponential growth. Importantly, despite these changes, the adder property is preserved. This emphasizes that the reduction in size variability is a consequence of the growth law rather than simple scaling with mean size. Finally, we analyze stochastic clonal proliferation and find that growth saturation influences both single-cell size statistics and variability across populations. 
Our results provide a generalized framework for connecting multi-step adder mechanisms with proliferation dynamics, extending size control theory beyond exponential growth.</p>","PeriodicalId":93888,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-12-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12772695/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145919191","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Miguel E Wimbish, Nicole K Guittari, Victoria A Rose, Jorge L Rivera, Patricia K Rivlin, Mark A Hinton, Jordan K Matelsky, Nicole E Stock, Brock A Wester, Erik C Johnson, William R Gray-Roncal
High-resolution volumetric neuroimaging datasets from electron microscopy (EM) and x-ray micro and holographic-nano tomography (XRM/XHN) are being generated at an increasing rate and by a growing number of research teams. These datasets are derived from an increasing number of species, in an increasing number of brain regions, and with an increasing number of techniques. Each of these large-scale datasets, often surpassing petascale levels, is typically accompanied by a unique and varied set of metadata. These datasets can be used to derive connectomes, or neuron-synapse level connectivity diagrams, to investigate the fundamental organization of neural circuitry, neuronal development, and neurodegenerative disease. Standardization is essential to facilitate comparative connectomics analysis and enhance data utilization. Although the neuroinformatics community has successfully established and adopted data standards for many modalities, this effort has not yet encompassed EM and XRM/XHN connectomics data. This lack of standardization isolates these datasets, hindering their integration and comparison with other research performed in the field. Towards this end, our team formed a working group consisting of community stakeholders to develop Image and Experimental Metadata Standards for EM and XRM/XHN data to ensure the scientific impact and further motivate the generation and sharing of these data. This document addresses version 1.1 of these standards, aiming to support metadata services and future software designs for community collaboration. Standards for derived annotations are described in a companion document. Standards definitions are available on a community GitHub page. We hope these standards will enable comparative analysis, improve interoperability between connectomics software tools, and continue to be refined and improved by the neuroinformatics community.
{"title":"EM and XRM Connectomics Imaging and Experimental Metadata Standards.","authors":"Miguel E Wimbish, Nicole K Guittari, Victoria A Rose, Jorge L Rivera, Patricia K Rivlin, Mark A Hinton, Jordan K Matelsky, Nicole E Stock, Brock A Wester, Erik C Johnson, William R Gray-Roncal","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>High resolution volumetric neuroimaging datasets from electron microscopy (EM) and x-ray micro and holographic-nano tomography (XRM/XHN) are being generated at an increasing rate and by a growing number of research teams. These datasets are derived from an increasing number of species, in an increasing number of brain regions, and with an increasing number of techniques. Each of these large-scale datasets, often surpassing petascale levels, is typically accompanied by a unique and varied set of metadata. These datasets can be used to derive connectomes, or neuron-synapse level connectivity diagrams, to investigate the fundamental organization of neural circuitry, neuronal development, and neurodegenerative disease. Standardization is essential to facilitate comparative connectomics analysis and enhance data utilization. Although the neuroinformatics community has successfully established and adopted data standards for many modalities, this effort has not yet encompassed EM and XRM/ XHN connectomics data. This lack of standardization isolates these datasets, hindering their integration and comparison with other research performed in the field. Towards this end, our team formed a working group consisting of community stakeholders to develop Image and Experimental Metadata Standards for EM and XRM/XHN data to ensure the scientific impact and further motivate the generation and sharing of these data. This document addresses version 1.1 of these standards, aiming to support metadata services and future software designs for community collaboration. Standards for derived annotations are described in a companion document. 
Standards definitions are available on a community github page. We hope these standards will enable comparative analysis, improve interoperability between connectomics software tools, and continue to be refined and improved by the neuroinformatics community.</p>","PeriodicalId":93888,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-12-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12772707/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145918950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sean Campbell, Courtney C White, Amanda M Alexander, William Ott
Delay is an inherent feature of genetic regulatory networks. It represents the time required for the assembly of functional regulator proteins. The protein production process is complex, as it includes transcription, translocation, translation, folding, and oligomerization. Because these steps are noisy, the resulting delay associated with protein production is distributed (random). Here we consider how distributed delay impacts the dynamics of bistable genetic circuits. We show that for a variety of genetic circuits that exhibit bistability, increasing the noise level in the delay distribution dramatically stabilizes the metastable states. By this we mean that mean residence times in the metastable states dramatically increase.
Relevance to life sciences: Bistable genetic regulatory networks are ubiquitous in living organisms. Evolutionary processes seem to have tuned such networks so that they switch between metastable states when it is important to do so, but small fluctuations do not cause unwanted switching. Understanding how evolution has tuned the stability of biological switches is an important problem. In particular, such understanding can guide the design of forward-engineered synthetic bistable genetic regulatory networks.
Mathematical content: We use two methods to explain this stabilization phenomenon. First, we introduce and simulate stochastic hybrid models that depend on a switching-rate parameter. These stochastic hybrid models allow us to unfold the distributed-delay models in the sense that, in certain cases, the distributed-delay model can be viewed as a fast-switching limit of the corresponding stochastic hybrid model. Second, we generalize the three-state model, a symbolic model of bistability, and analyze this extension.
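A distributed-delay model of the kind described above can be simulated with a delay stochastic simulation algorithm, in which each initiated production completes after a random (here Gamma-distributed) delay whose coefficient of variation sets the delay noise level. All rates, the Hill-type self-activation, and the parameter values below are illustrative assumptions, not the paper's specific circuits:

```python
import heapq
import numpy as np

def delay_ssa(tmax=50.0, a=40.0, b=2.0, K=20.0, mean_tau=1.0, cv=0.5, seed=1):
    """Delay SSA sketch for a self-activating gene (assumed model):
    production is initiated at Hill rate b + a*x^2/(K^2 + x^2), the finished
    protein arrives after a Gamma-distributed delay with the given mean and
    coefficient of variation, and proteins degrade at unit per-capita rate."""
    rng = np.random.default_rng(seed)
    shape = 1.0 / cv**2            # Gamma shape/scale from mean and CV
    scale = mean_tau / shape
    x, t = 0, 0.0
    pending = []                   # min-heap of scheduled completion times
    times, states = [0.0], [0]
    while t < tmax:
        rate_init = b + a * x * x / (K * K + x * x)
        rate_deg = 1.0 * x
        total = rate_init + rate_deg
        dt = rng.exponential(1.0 / total)
        if pending and pending[0] <= t + dt:
            t = heapq.heappop(pending)   # a queued production finishes first
            x += 1
        else:
            t += dt
            if rng.random() < rate_init / total:
                heapq.heappush(pending, t + rng.gamma(shape, scale))
            else:
                x -= 1                   # degradation event
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)
```

Sweeping `cv` while holding `mean_tau` fixed, and measuring mean residence times near the low and high states, is the kind of numerical experiment the stabilization claim refers to.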
{"title":"DISTRIBUTED DELAY STABILIZES BISTABLE GENETIC NETWORKS.","authors":"Sean Campbell, Courtney C White, Amanda M Alexander, William Ott","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Delay is an inherent feature of genetic regulatory networks. It represents the time required for the assembly of functional regulator proteins. The protein production process is complex, as it includes transcription, translocation, translation, folding, and oligomerization. Because these steps are noisy, the resulting delay associated with protein production is distributed (random). We here consider how distributed delay impacts the dynamics of bistable genetic circuits. We show that for a variety of genetic circuits that exhibit bistability, increasing the noise level in the delay distribution dramatically stabilizes the metastable states. By this we mean that mean residence times in the metastable states dramatically increase.</p><p><strong>Relevance to life sciences: </strong>Bistable genetic regulatory networks are ubiquitous in living organisms. Evolutionary processes seem to have tuned such networks so that they switch between metastable states when it is important to do so, but small fluctuations do not cause unwanted switching. Understanding how evolution has tuned the stability of biological switches is an important problem. In particular, such understanding can guide the design of forward-engineered synthetic bistable genetic regulatory networks.</p><p><strong>Mathematical content: </strong>We use two methods to explain this stabilization phenomenon. First, we introduce and simulate stochastic hybrid models that depend on a switching-rate parameter. These stochastic hybrid models allow us to unfold the distributed-delay models in the sense that, in certain cases, the distributed-delay model can be viewed as a fast-switching limit of the corresponding stochastic hybrid model. 
Second, we generalize the three-states model, a symbolic model of bistability, and analyze this extension.</p>","PeriodicalId":93888,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12755252/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145890882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mojtaba Safari, Shansong Wang, Vanessa L Wildman, Mingzhe Hu, Zach Eidex, Chih-Wei Chang, Erik H Middlebrooks, Richard L J Qiu, Pretesh Patel, Ashesh B Jani, Hui Mao, Zhen Tian, Xiaofeng Yang
<p><strong>Background: </strong>High-resolution MRI is essential for accurate diagnosis and treatment planning, but its clinical acquisition is often constrained by long scanning times, which increase patient discomfort and reduce scanner throughput. While super-resolution (SR) techniques offer a post-acquisition solution to enhance resolution, existing deep learning approaches face trade-offs between reconstruction fidelity and computational efficiency, limiting their clinical applicability.</p><p><strong>Purpose: </strong>This study aims to develop an efficient and accurate deep learning framework for MRI super-resolution that preserves fine anatomical detail while maintaining low computational overhead, enabling practical integration into clinical workflows.</p><p><strong>Materials and methods: </strong>We propose a novel SR framework based on multi-head selective state-space models (MHSSM) integrated with a lightweight channel multilayer perceptron (MLP). The model employs 2D patch extraction with hybrid scanning strategies (vertical, horizontal, and diagonal) to capture long-range dependencies while mitigating pixel forgetting. Each MambaFormer block combines MHSSM, depthwise convolutions, and gated channel mixing to balance local and global feature representation. The framework was trained and evaluated on two distinct datasets: 7T brain T1 MP2RAGE maps (142 subjects) and 1.5T prostate T2w MRI (334 subjects). Performance was compared against multiple baselines including Bicubic interpolation, GAN-based (CycleGAN, Pix2pix, SPSR), transformer-based (SwinIR), Mamba-based (MambaIR), and diffusion-based (I<sup>2</sup>SB, Res-SRDiff) methods.</p><p><strong>Results: </strong>The proposed model demonstrated superior performance across all evaluation metrics while maintaining exceptional computational efficiency. 
On the 7T brain dataset, our method achieved the highest structural similarity (SSIM: 0.951 ± 0.021) and peak signal-to-noise ratio (PSNR: 26.90 ± 1.41 dB), along with the best perceptual quality scores (LPIPS: 0.076 ± 0.022; GMSD: 0.083 ± 0.017). These results represented statistically significant improvements over all baselines (<i>p</i> < 0.001), including a 2.1% SSIM gain over SPSR and a 2.4% PSNR improvement over Res-SRDiff. For the prostate dataset, the model similarly outperformed competing approaches, achieving SSIM of 0.770 ± 0.049, PSNR of 27.15 ± 2.19 dB, LPIPS of 0.190 ± 0.095, and GMSD of 0.087 ± 0.013. Notably, our framework accomplished these results with only 0.9 million parameters and 57 GFLOPs, representing reductions of 99.8% in parameters and 97.5% in computational operations compared to Res-SRDiff, while also substantially outperforming SwinIR and MambaIR in both accuracy and efficiency metrics.</p><p><strong>Conclusion: </strong>The proposed framework provides a computationally efficient yet accurate solution for MRI super-resolution, delivering well-defined anatomical details and improved perceptual fidelity across an
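The hybrid scanning idea in the methods (flattening a 2D patch grid along vertical, horizontal, and diagonal paths so a selective state-space model sees different long-range orderings) can be illustrated with a toy sketch. The `scan_orders` helper and the 3×3 grid are our own illustration, not the paper's implementation:

```python
import numpy as np

def scan_orders(h: int, w: int):
    """Toy illustration of three ways to flatten an h x w patch grid into
    1D sequences: row-major (horizontal), column-major (vertical), and
    anti-diagonal traversal. Selective scans over such different orders
    expose different long-range dependencies to the sequence model."""
    idx = np.arange(h * w).reshape(h, w)
    horizontal = idx.reshape(-1)          # sweep each row left to right
    vertical = idx.T.reshape(-1)          # sweep each column top to bottom
    # Anti-diagonals of the left-right-flipped grid, upper-right first.
    diagonal = np.concatenate(
        [np.fliplr(idx).diagonal(k) for k in range(w - 1, -h, -1)]
    )
    return horizontal, vertical, diagonal

hor, ver, dia = scan_orders(3, 3)
print(hor.tolist())  # [0, 1, 2, 3, 4, 5, 6, 7, 8]
print(ver.tolist())  # [0, 3, 6, 1, 4, 7, 2, 5, 8]
print(dia.tolist())  # [0, 1, 3, 2, 4, 6, 5, 7, 8]
```

Each order visits every patch exactly once, so the three scans differ only in which patches end up adjacent in the 1D sequence.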
{"title":"Efficient Vision Mamba for MRI Super-Resolution via Hybrid Selective Scanning.","authors":"Mojtaba Safari, Shansong Wang, Vanessa L Wildman, Mingzhe Hu, Zach Eidex, Chih-Wei Chang, Erik H Middlebrooks, Richard L J Qiu, Pretesh Patel, Ashesh B Jani, Hui Mao, Zhen Tian, Xiaofeng Yang","journal":"ArXiv","publicationDate":"2025-12-25","openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12755253/pdf/","paperid":"145890822"}
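PSNR, one of the fidelity metrics reported for this paper, has a simple closed form: 10·log10(MAX² / MSE). A minimal sketch follows; the `psnr` helper and the toy arrays are ours for illustration, not the paper's evaluation code:

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)

# Toy example: a clean "image" in [0, 1] and a noisy copy.
rng = np.random.default_rng(0)
clean = rng.random((64, 64))
noisy = np.clip(clean + rng.normal(0.0, 0.05, clean.shape), 0.0, 1.0)
print(round(psnr(clean, noisy), 2))
```

Higher is better; a constant error of 0.1 on a [0, 1] range gives exactly 20 dB, which helps sanity-check an implementation.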
Eugene H Lin, Yishun Zhou, Hsin-Yi Hung, Luke Moon, Andrew Gordus, Chen Li
One of the key functions of organisms is to sense their physical environment so that they can act appropriately on the information they sense. All spiders can perceive their environment through vibration sensors in their legs, and most spiders rely on substrate-borne vibration sensing to detect prey. Orb-weaving spiders primarily sense leg vibrations to detect and locate prey caught on their wheel-shaped webs. Biological experiments and computational modeling have elucidated the physics of how these spiders use long-timescale web-building behaviors, which occur before prey capture, to modulate vibration sensing of prey by controlling web geometry, materials, and tension distribution. By contrast, the physics of how spiders use short-timescale leg behaviors to modulate vibration sensing on a web during prey capture is less well understood. This is in part due to challenges in biological experiments (e.g., little control over spider behavior, difficulty measuring vibrations of the whole spider-web-prey system) and in theoretical/computational modeling (e.g., closed-form equations are intractable for a complex web, and simulating vibrations with behaving animals is computationally costly). Here, we use robophysical modeling as a complementary approach to address these challenges and study how the dynamic leg crouching behavior common in orb-weaving spiders contributes to vibration sensing of prey on a web. Following observations of the orb-weaver Uloborus diversus from a parallel biological study, we created a robophysical model consisting of a spider robot that can dynamically crouch its legs and sense its leg vibrations, and a prey robot that can shake, both on a horizontal, physical, wheel-shaped web. Without the prey robot, after each dynamic crouch, the spider robot sensed leg vibrations with only one dominant frequency: the natural frequency of itself passively vibrating on the web. 
With the prey robot, after each dynamic crouch, the spider robot sensed leg vibrations with two dominant frequencies, the additional, higher frequency being the natural frequency of the prey robot passively vibrating on its spiral thread, induced by the spider robot's dynamic crouch. This additional frequency increased as the prey robot came closer to the web center, where the spider robot was located. These features allowed the spider robot to detect prey presence and distance. We developed a minimalistic physics model that decoupled the spider-web-prey system into two subsystems to explain these observations. Guided by these results, we found evidence of the same physical mechanism in the web of the U. diversus spider during prey capture, in data from the parallel biological study. Our work demonstrates that robophysical modeling is a useful approach for discovering physical mechanisms of how spiders use short-timescale leg behaviors to enhance vibration sensing of objects on a web, and for generating new biological hypotheses.
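The decoupled two-subsystem picture treats each robot as a mass on an elastic support, whose undamped natural frequency is f = √(k/m) / 2π; a shorter thread segment between anchor points is stiffer, so a prey closer to the hub vibrates faster. A minimal sketch under toy assumptions (the masses and stiffnesses below are invented illustrative numbers, not measurements from the study):

```python
import math

def natural_frequency_hz(stiffness_n_per_m: float, mass_kg: float) -> float:
    """Undamped natural frequency f = sqrt(k/m) / (2*pi) of a mass on an
    elastic support, in Hz."""
    return math.sqrt(stiffness_n_per_m / mass_kg) / (2.0 * math.pi)

# Toy numbers (assumed, not measured): effective thread stiffness rises
# as the prey sits closer to the hub, so its natural frequency rises too.
prey_mass = 0.5e-3  # 0.5 g prey robot, assumed
for distance_cm, k in [(10, 2.0), (5, 4.0), (2, 10.0)]:
    f = natural_frequency_hz(k, prey_mass)
    print(f"{distance_cm} cm from hub: k={k} N/m -> f={f:.0f} Hz")
```

This monotone stiffness-frequency relation is one plausible reading of why the sensed second frequency encodes prey distance.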
{"title":"Why orb-weaving spiders use leg crouching behavior in vibration sensing of prey on a web: A physical mechanism from robophysical modeling.","authors":"Eugene H Lin, Yishun Zhou, Hsin-Yi Hung, Luke Moon, Andrew Gordus, Chen Li","journal":"ArXiv","publicationDate":"2025-12-25","openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12755255/pdf/","paperid":"145890266"}
Mary Elena An, Paul Griffin, Jonathan G Stine, Ramakrishna Balakrishnan, Soundar Kumara
Background: Metabolic Dysfunction-Associated Steatotic Liver Disease (MASLD) affects ~33% of U.S. adults and is the most common chronic liver disease. Although often asymptomatic, progression can lead to cirrhosis. Early detection is important, as lifestyle interventions can prevent disease progression. We developed a fair, rigorous, and reproducible MASLD prediction model and compared it to prior methods using a large electronic health record database.
Methods: We evaluated LASSO logistic regression, random forest, XGBoost, and a neural network for MASLD prediction using clinical feature subsets, including the top 10 SHAP-ranked features. To reduce disparities in true positive rates across racial and ethnic subgroups, we applied an equal opportunity postprocessing method.
Results: This study included 59,492 patients in the training set, 24,198 in the validation set, and 25,188 in the test set. The LASSO logistic regression model with the top 10 features was selected for its interpretability and comparable performance. Before fairness adjustment, the model achieved an AUROC of 0.84, accuracy of 78%, sensitivity of 72%, specificity of 79%, and F1-score of 0.617. After equal opportunity postprocessing, accuracy modestly increased to 81% and specificity to 94%, while sensitivity decreased to 41% and the F1-score to 0.515, reflecting the fairness trade-off.
Conclusions: We developed the MASER prediction model (MASLD Static EHR Risk Prediction), a LASSO logistic regression model that achieved competitive performance for MASLD prediction (AUROC 0.836, accuracy 77.6%), comparable to previously reported ensemble and tree-based models. Overall, this approach demonstrates that interpretable models can balance predictive performance and fairness in diverse patient populations.
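Equal opportunity postprocessing of the kind described in the methods can be sketched as choosing a separate decision threshold per subgroup so that each group's true-positive rate lands near a common target. The helper below and its toy data are our own simplification (real pipelines typically use a tool such as fairlearn's ThresholdOptimizer), not the authors' code:

```python
import numpy as np

def group_thresholds_for_equal_tpr(scores, labels, groups, target_tpr=0.7):
    """For each subgroup, pick the score threshold at the (1 - target_tpr)
    quantile of that group's positive-class scores, so that roughly
    target_tpr of true positives in every group score above the cutoff
    (a simple equal-opportunity postprocessing scheme)."""
    thresholds = {}
    for g in np.unique(groups):
        pos = scores[(groups == g) & (labels == 1)]
        thresholds[g] = float(np.quantile(pos, 1.0 - target_tpr))
    return thresholds

# Toy data: group 1's scores are shifted lower, mimicking a TPR gap that
# a single global threshold would leave in place.
rng = np.random.default_rng(1)
n = 1000
groups = rng.integers(0, 2, n)
labels = rng.integers(0, 2, n)
scores = rng.random(n) + 0.3 * labels - 0.15 * groups
th = group_thresholds_for_equal_tpr(scores, labels, groups, target_tpr=0.72)
print({int(g): round(t, 3) for g, t in th.items()})
```

Lowering the threshold for the disadvantaged group raises its sensitivity but also its false-positive rate, which is the specificity/sensitivity trade-off the Results section reports.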
{"title":"Predicting Metabolic Dysfunction-Associated Steatotic Liver Disease using Machine Learning Methods.","authors":"Mary Elena An, Paul Griffin, Jonathan G Stine, Ramakrishna Balakrishnan, Soundar Kumara","journal":"ArXiv","publicationDate":"2025-12-24","openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12755246/pdf/","paperid":"145890816"}