Application of Ideal Observer for Thresholded Data in Search Task.
Hongwei Lin, Howard C Gifford
This study advances task-based image quality assessment by developing an anthropomorphic thresholded visual-search model observer. The model is an ideal observer for thresholded data inspired by the human visual system, allowing selective processing of high-salience features to improve discrimination performance. By filtering out irrelevant variability, the model enhances diagnostic accuracy and computational efficiency. The observer employs a two-stage framework: candidate selection and decision-making. Using thresholded data during candidate selection refines regions of interest, while stage-specific feature processing optimizes performance. Simulations were conducted to evaluate the effects of thresholding on feature maps, candidate localization, and multi-feature scenarios. Results demonstrate that thresholding improves observer performance by excluding low-salience features, particularly in noisy environments. Intermediate thresholds often outperform no thresholding, indicating that retaining only relevant features is more effective than keeping all features. Additionally, the model demonstrates effective training with fewer images while maintaining alignment with human performance. These findings suggest that the proposed framework can predict human visual search performance in clinically realistic tasks and provide solutions for model observer training with limited resources. Our approach has applications in other areas where human visual search and detection tasks are modeled, such as computer vision, machine learning, and defense and security image analysis.
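The candidate-selection stage is simple to prototype. Below is a minimal, hypothetical sketch of that stage only: a feature map is thresholded at a percentile, and local maxima of the surviving responses become fixation candidates. The percentile cutoff, neighborhood size, and all names are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of thresholded candidate selection: keep only
# high-salience feature responses, then take local maxima as candidates.
import numpy as np
from scipy.ndimage import maximum_filter

def candidate_selection(feature_map: np.ndarray, pct: float = 80.0,
                        neighborhood: int = 9) -> np.ndarray:
    """Return (row, col) fixation candidates from a thresholded feature map."""
    # Stage 1: discard low-salience responses below the given percentile.
    cutoff = np.percentile(feature_map, pct)
    salient = np.where(feature_map >= cutoff, feature_map, 0.0)
    # Keep only local maxima within the neighborhood (one candidate per blob).
    peaks = (salient == maximum_filter(salient, size=neighborhood)) & (salient > 0)
    return np.argwhere(peaks)

rng = np.random.default_rng(0)
fmap = rng.normal(size=(64, 64))
fmap[30:34, 30:34] += 3.0          # a high-salience "target" region
print(candidate_selection(fmap)[:5])
```

Sweeping `pct` in this sketch mimics the abstract's comparison of intermediate thresholds against no thresholding.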
Noise enhances odor source localization.
Francesco Marcolli, Martin James, Agnese Seminara
We address the problem of inferring the location of a target that releases odor in the presence of turbulence. Input for the inference is provided by many sensors scattered within the odor plume. Drawing inspiration from distributed chemosensation in biology, we ask whether the accuracy of the inference is affected by proprioceptive noise, i.e., noise on the perceived location of the sensors. Surprisingly, in the presence of a net fluid flow, proprioceptive noise improves Bayesian inference rather than degrading it. An optimal noise level exists that efficiently leverages additional information hidden within the geometry of the odor plume. Empirically tuned noise performs well across a range of distances and may be implemented in practice. Other sources of noise also improve accuracy, owing to their ability to break the spatiotemporal correlations of the turbulent plume. These counterintuitive benefits of noise may be leveraged to improve sensory processing in biology and robotics.
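To make the setup concrete, here is a toy grid-based version of the inference: binary odor detections from scattered sensors update a posterior over candidate source locations, and Gaussian jitter on the perceived sensor positions plays the role of proprioceptive noise. The exponential detection model and every parameter are illustrative assumptions, not the paper's turbulent-plume model.

```python
# Minimal sketch of grid-based Bayesian source inference from binary
# odor detections, with optional jitter on perceived sensor positions.
import numpy as np

rng = np.random.default_rng(1)
grid = np.linspace(0.0, 10.0, 201)          # candidate source locations
sensors = rng.uniform(0.0, 10.0, size=40)   # true sensor positions
source = 3.0
lam = 1.5                                    # detection length scale

def p_detect(src, sens):
    return np.exp(-np.abs(src - sens) / lam)

hits = rng.random(sensors.size) < p_detect(source, sensors)  # observed data

def posterior(perceived):
    logp = np.zeros_like(grid)
    for s, h in zip(perceived, hits):
        p = p_detect(grid, s)
        logp += np.log(np.where(h, p, 1.0 - p) + 1e-12)
    w = np.exp(logp - logp.max())
    return w / w.sum()

for sigma in (0.0, 0.3):                     # exact vs. noisy proprioception
    post = posterior(sensors + rng.normal(0.0, sigma, sensors.size))
    print(f"sigma={sigma}: MAP estimate = {grid[np.argmax(post)]:.2f}")
```

The paper's effect concerns a flow-shaped plume; this sketch only shows where the noise enters the inference, not why it helps.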
Histopathology-centered Computational Evolution of Spatial Omics: Integration, Mapping, and Foundation Models.
Ninghui Hao, Xinxing Yang, Boshen Yan, Dong Li, Junzhou Huang, Xintao Wu, Emily S Ruiz, Arlene Ruiz de Luzuriaga, Chen Zhao, Guihong Wan
Spatial omics (SO) technologies enable spatially resolved molecular profiling, while hematoxylin and eosin (H&E) imaging remains the gold standard for morphological assessment in clinical pathology. Recent computational advances increasingly place H&E images at the center of SO analysis, bridging morphology with transcriptomic, proteomic, and other spatial molecular modalities, and pushing resolution toward the single-cell level. In this survey, we systematically review the computational evolution of SO from a histopathology-centered perspective and organize existing methods into three paradigms: integration, which jointly models paired multimodal data; mapping, which infers molecular profiles from H&E images; and foundation models, which learn generalizable representations from large-scale spatial datasets. We analyze how the role of H&E images evolves across these paradigms from spatial context to predictive anchor and ultimately to representation backbone in response to practical constraints such as limited paired data and increasing resolution demands. We further summarize actionable modeling directions enabled by current architectures and delineate persistent gaps driven by data, biology, and technology that are unlikely to be resolved by model design alone. Together, this survey provides a histopathology-centered roadmap for developing and applying computational frameworks in SO.
Charting the velocity of brain growth and development.
Johanna M M Bayer, Augustijn A A de Boer, Barbora Rehák-Bučková, Charlotte J Fraza, Tobias Banaschewski, Gareth J Barker, Arun L W Bokde, Rüdiger Brühl, Sylvane Desrivières, Herta Flor, Hugh Garavan, Penny Gowland, Antoine Grigis, Andreas Heinz, Herve Lemaitre, Jean-Luc Martinot, Marie-Laure Paillère Martinot, Eric Artigues, Frauke Nees, Dimitri Papadopoulos Orfanos, Tomáš Paus, Luise Poustka, Michael N Smolka, Nathalie Holz, Nilakshi Vaidya, Henrik Walter, Robert Whelan, Paul Wirsching, Gunter Schumann, Nina Kraguljac, Christian F Beckmann, Andre F Marquand
Brain charts have emerged as a highly useful approach for understanding brain development and aging on the basis of brain imaging, and have shown substantial utility in describing typical and atypical brain development with respect to a given reference model. However, all existing models are fundamentally cross-sectional and cannot capture change over time at the individual level. We address this using velocity centiles, which directly map change over time and can be overlaid onto cross-sectionally derived population centiles. We demonstrate this by modelling rates of change for 24,062 scans from 10,795 healthy individuals with up to 8 longitudinal measurements across the lifespan. We provide a method to detect individual deviations from a stable trajectory, generalising the notion of 'thrive lines', which are used in pediatric medicine to declare 'failure to thrive'. Using this approach, we predict the transition from mild cognitive impairment to dementia more accurately than by using either time point alone, replicated across two datasets. Lastly, by taking into account multiple time points, we improve the sensitivity of velocity models for predicting the future trajectory of brain change. This highlights the value of predicting change over time and marks a fundamental step towards precision medicine.
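The core computation behind a velocity centile can be sketched in a few lines: annualize the change between two scans and score it against a reference distribution of rates at that age. The linear reference mean and constant SD below are made-up stand-ins for a fitted normative model, not the study's actual charts.

```python
# Illustrative velocity-centile computation for a single individual.
import numpy as np
from scipy.stats import norm

def velocity(y1, y2, t1, t2):
    return (y2 - y1) / (t2 - t1)             # phenotype units per year

def ref_velocity_mean(age):                   # hypothetical normative curve
    return -0.5 - 0.01 * age

def ref_velocity_sd(age):                     # hypothetical, age-constant SD
    return 0.4

age1, age2 = 64.0, 66.0
v = velocity(102.3, 100.1, age1, age2)        # e.g., regional volume change
mid_age = 0.5 * (age1 + age2)
z = (v - ref_velocity_mean(mid_age)) / ref_velocity_sd(mid_age)
print(f"velocity = {v:.2f}/yr, velocity centile = {100 * norm.cdf(z):.1f}")
```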
Critical Shortfall in NIH Support for Medical Physics Research.
Guillem Pratx, Wensha Yang, Matthew L Scarpelli
This report summarizes changes in federal research funding to the medical physics community between FY24 and FY25. By linking the AAPM membership database with NIH RePORTER records, we quantified the distribution of NIH funding for projects led by AAPM researchers. Although total NIH funding to AAPM members remained relatively stable across the two years, the composition of that funding shifted substantially. Competing (new and renewal) awards declined 50%, driven largely by an 80% collapse in new R01 grants from the National Cancer Institute (NCI). In contrast, noncompeting continuation awards increased by 10%, following a shift in how NIH funds multi-year projects. These changes occurred in the context of widespread disruptions to NIH review and grantmaking, including delayed study sections and more stringent administrative requirements. Federal funding is essential to sustaining innovation, supporting early-stage investigators, and ensuring that patients receive the best possible care. The trends identified here raise concerns about the long-term vitality and stability of the medical physics research pipeline.
Predicting Region of Interest in Human Visual Search Based on Statistical Texture and Gabor Features.
Hongwei Lin, Diego Andrade, Mini Das, Howard C Gifford
Understanding human visual search behavior is a fundamental problem in vision science and computer vision, with direct implications for modeling how observers allocate attention in location-unknown search tasks. In this study, we investigate the relationship between Gabor-based features and gray-level co-occurrence matrix (GLCM)-based texture features in modeling early-stage visual search behavior. Two feature-combination pipelines are proposed to integrate Gabor and GLCM features for narrowing the region of possible human fixations. The pipelines are evaluated using simulated digital breast tomosynthesis images. Results show qualitative agreement among fixation candidates predicted by the proposed pipelines and a threshold-based model observer. A strong correlation (r = 0.765) is observed between GLCM mean and Gabor feature responses, indicating that these features encode related image information despite their different formulations. Eye-tracking data from human observers further suggest consistency between predicted fixation regions and early-stage gaze behavior. These findings highlight the value of combining structural and texture-based features for modeling visual search and support the development of perceptually informed observer models.
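The reported correlation is straightforward to reproduce in spirit with scikit-image: compute a per-patch GLCM mean and a per-patch mean Gabor magnitude, then correlate the two feature vectors. Patch size, Gabor frequency, and GLCM offsets below are illustrative choices, not the study's parameters, and the random image stands in for the tomosynthesis data.

```python
# Sketch: correlate per-patch GLCM mean with per-patch Gabor magnitude.
import numpy as np
from skimage.filters import gabor
from skimage.feature import graycomatrix

rng = np.random.default_rng(2)
img = (rng.random((128, 128)) * 255).astype(np.uint8)

real, imag = gabor(img.astype(float), frequency=0.2)
gabor_mag = np.hypot(real, imag)              # Gabor magnitude response

def glcm_mean(patch):
    glcm = graycomatrix(patch, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)[:, :, 0, 0]
    i = np.arange(256)
    return float((glcm.sum(axis=1) * i).sum())  # E[i] of the co-occurrence

size = 16
texture_feat, gabor_feat = [], []
for r in range(0, 128, size):
    for c in range(0, 128, size):
        texture_feat.append(glcm_mean(img[r:r + size, c:c + size]))
        gabor_feat.append(gabor_mag[r:r + size, c:c + size].mean())

print("Pearson r =", np.corrcoef(texture_feat, gabor_feat)[0, 1])
```

On structured images (unlike this random stand-in), both features track local luminance and scale, which is consistent with the strong correlation the abstract reports.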
Cedalion Tutorial: A Python-based framework for comprehensive analysis of multimodal fNIRS & DOT from the lab to the everyday world.
E Middell, L Carlton, S Moradi, T Codina, T Fischer, J Cutler, S Kelley, J Behrendt, T Dissanayake, N Harmening, M A Yücel, D A Boas, A von Lühmann
Significance: Functional near-infrared spectroscopy (fNIRS) and diffuse optical tomography (DOT) are rapidly evolving toward wearable, multimodal, and data-driven, AI-supported neuroimaging in the everyday world. However, current analytical tools are fragmented across platforms, limiting reproducibility, interoperability, and integration with modern machine learning (ML) workflows.
Aim: Cedalion is a Python-based open-source framework designed to unify advanced model-based and data-driven analysis of multimodal fNIRS and DOT data within a reproducible, extensible, and community-driven environment.
Approach: Cedalion integrates forward modelling, photogrammetric optode co-registration, signal processing, GLM analysis, DOT image reconstruction, and ML-based data-driven methods within a single standardized architecture based on the Python ecosystem. It adheres to SNIRF and BIDS standards, supports cloud-executable Jupyter notebooks, and provides containerized workflows for scalable, fully reproducible analysis pipelines that can be provided alongside original research publications.
Results: Cedalion connects established optical-neuroimaging pipelines with ML frameworks such as scikit-learn and PyTorch, enabling seamless multimodal fusion with EEG, MEG, and physiological data. It implements validated algorithms for signal-quality assessment, motion correction, GLM modelling, and DOT reconstruction, complemented by modules for simulation, data augmentation, and multimodal physiology analysis. Automated documentation links each method to its source publication, and continuous-integration testing ensures robustness. This tutorial paper provides seven fully executable notebooks that demonstrate core features.
Conclusions: Cedalion offers an open, transparent, and community-extensible foundation that supports reproducible, scalable, cloud- and ML-ready fNIRS/DOT workflows for laboratory-based and real-world neuroimaging.
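Cedalion's own readers are the intended entry point for SNIRF data; purely as a format-level illustration, SNIRF files are HDF5 containers, so a generic tool such as h5py can inspect one. The group paths below follow the SNIRF specification, "example.snirf" is a placeholder file name, and none of this is Cedalion's API.

```python
# Format-level peek at a SNIRF file (HDF5 layout per the SNIRF spec).
import h5py

with h5py.File("example.snirf", "r") as f:
    print("format:", f["formatVersion"][()])
    data = f["nirs/data1/dataTimeSeries"][:]   # samples x channels
    time = f["nirs/data1/time"][:]
    print(f"{data.shape[1]} channels, {data.shape[0]} samples, "
          f"{time[-1] - time[0]:.1f} s of recording")
```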
Fold-switching proteins push the boundaries of conformational ensemble prediction.
Myeongsang Lee, Lauren L Porter
A protein's function depends critically on its conformational ensemble, a collection of energy-weighted structures whose balance depends on temperature and environment. Though recent deep learning (DL) methods have substantially advanced predictions of single protein structures, computationally modeling conformational ensembles remains a challenge. Here, we focus on modeling fold-switching proteins, which remodel their secondary and/or tertiary structures and change their functions in response to cellular stimuli. These underrepresented members of the protein universe serve as test cases for a method's generalizability. They reveal that DL models often predict conformational ensembles by association with training-set structures, limiting generalizability. These observations suggest use cases for when DL methods will likely succeed or fail. Developing computational methods that successfully identify new fold-switching proteins from large pools of candidates may advance modeling conformational ensembles more broadly.
Effect of Right Ventricular Outflow Tract Material Properties on Simulated Transcatheter Pulmonary Placement.
Jalaj Maheshwari, Wensi Wu, Christopher N Zelonis, Steve A Maas, Kyle Sunderland, Yuval Barak-Corren, Stephen Ching, Patricia Sabin, Andras Lasso, Matthew J Gillespie, Jeffrey A Weiss, Matthew A Jolley
Finite element (FE) simulations emulating transcatheter pulmonary valve (TPV) system deployment in patient-specific right ventricular outflow tracts (RVOT) assume material properties for the RVOT and adjacent tissues. The sensitivity of the deployment to variation in RVOT material properties is unknown. Moreover, the effects of transannular patch stiffness and location on simulated TPV deployment have not been explored. A sensitivity analysis on the material properties of a patient-specific RVOT during TPV deployment, modeled as an uncoupled HGO material, was conducted using FEBioUncertainSCI. Further, the effects of a transannular patch during TPV deployment were analyzed by considering two patch locations and four patch stiffnesses. Visualization of results and quantification were performed using custom metrics implemented in SlicerHeart and FEBio. Sensitivity analysis revealed that the shear modulus of the ground matrix (c), fiber modulus (k₁), and fiber mean orientation angle (γ) had the greatest effect on 95th %ile stress, whereas only c had the greatest effect on 95th %ile Lagrangian strain. First-order sensitivity indices contributed the greatest to the total-order sensitivity indices. Simulations using a transannular patch revealed that peak stress and strain were dependent on patch location. As stiffness of the patch increased, greater stress was observed at the interface connecting the patch to the RVOT, and stress in the patch itself increased while strain decreased. The total volume enclosed by the TPV device remained unchanged across all simulated patch cases. This study highlights that while uncertainties in tissue material properties and patch locations may influence functional outcomes, FE simulations provide a reliable framework for evaluating these outcomes in TPVR.
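The distinction between first-order and total-order indices is easy to see in a generic Monte Carlo Sobol estimator (Saltelli/Jansen form). The toy response function below stands in for the FE model; it is not the HGO simulation, and the index values it prints illustrate the estimators only.

```python
# Generic Monte Carlo Sobol sensitivity estimators on a toy model.
import numpy as np

def model(x):                        # toy stand-in for the FE response
    return x[:, 0] ** 2 + x[:, 0] * x[:, 1] + 0.1 * x[:, 2]

rng = np.random.default_rng(3)
N, d = 100_000, 3
A, B = rng.random((N, d)), rng.random((N, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    AB_i = A.copy()
    AB_i[:, i] = B[:, i]             # A with column i swapped in from B
    fAB_i = model(AB_i)
    S1 = np.mean(fB * (fAB_i - fA)) / var        # first-order index
    ST = 0.5 * np.mean((fA - fAB_i) ** 2) / var  # total-order (Jansen)
    print(f"x{i}: S1 = {S1:.2f}, ST = {ST:.2f}")
```

When S1 dominates ST for every input, as the abstract reports, the response is largely additive in the uncertain parameters, with weak interaction effects.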
The rights and wrongs of rescaling in population genetics simulations.
Parul Johri, Fanny Pouyet, Brian Charlesworth
Computer simulations of complex population genetic models are an essential tool for making sense of the large-scale datasets of multiple genome sequences from a single species that are becoming increasingly available. A widely used approach for reducing computing time is to simulate populations that are much smaller than the natural populations they are intended to represent, using parameters such as selection coefficients and mutation rates whose products with the population size correspond to those of the natural populations. This approach has come to be known as rescaling, and is justified by the theory of the genetics of finite populations. Recently, however, there have been criticisms of this practice, which have brought to light situations in which it can lead to erroneous conclusions. This paper reviews the theoretical basis for rescaling and relates it to current practice in population genetics simulations. It shows that some population genetic statistics are scalable while others are not. Additionally, it shows that there are likely to be problems with rescaling when simulating large chromosomal regions, owing to the non-linear relation between the physical distance separating a pair of nucleotide sites and the frequency of recombination between them. Other difficulties with rescaling can arise in connection with simulations of selection on complex traits, and with populations that reproduce partly by self-fertilization or asexual reproduction. A number of recommendations are made for good practice in relation to rescaling.
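The rescaling logic, and the recombination caveat, can be shown numerically. With a rescaling factor Q, per-generation rates are multiplied by Q while N is divided by Q, preserving products such as Ns and Nμ; Haldane's map function then shows why linear rescaling of recombination misbehaves over long regions. All parameter values below are illustrative.

```python
# Q-fold rescaling: products N*s and N*mu are preserved, but the
# non-linear map-distance/recombination relation is not.
import numpy as np

Q = 100
N, s, mu, r_bp = 1_000_000, 1e-5, 1e-8, 1e-8   # "natural" population
N_s, s_s, mu_s = N // Q, s * Q, mu * Q          # rescaled population

print("N*s  :", N * s,  "->", N_s * s_s)        # invariant under rescaling
print("N*mu :", N * mu, "->", N_s * mu_s)       # invariant under rescaling

def haldane_c(morgans):
    """Recombination fraction for a map distance (Haldane, no interference)."""
    return 0.5 * (1.0 - np.exp(-2.0 * morgans))

L = 50_000_000                                   # a 50 Mb region
d = r_bp * L                                     # 0.5 Morgans
print("true c over 50 Mb       :", haldane_c(d))      # ~0.32, already non-linear
print("naively rescaled Q*c    :", Q * haldane_c(d))  # > 0.5, i.e., impossible
print("c at rescaled map dist. :", haldane_c(Q * d))  # saturates at 0.5
```

The last two lines make the paper's point concrete: a recombination fraction cannot exceed 0.5, so multiplying rates by Q distorts linkage patterns across large chromosomal regions even when Ns and Nμ are faithfully preserved.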