Guantong Qi, Jiasheng Wang, Mei Ling Chong, Zahid Shaik, Shenglan Li, Shinya Yamamoto, Undiagnosed Diseases Network, Pengfei Liu, Hu Chen, Zhandong Liu
Millions of children worldwide are affected by severe rare Mendelian disorders, yet exome and genome sequencing still fail to provide a definitive molecular diagnosis for a large fraction of patients, prolonging the diagnostic odyssey. Bridging this gap increasingly requires transitioning from DNA-only interpretation to multi-modal diagnostic reasoning that combines genomic data, transcriptomic sequencing (RNA-seq), and phenotype information; however, computational frameworks that coherently integrate these signals remain limited. Here we present RareCollab, an agentic diagnostic framework that pairs a stable quantitative Diagnostic Engine with Large Language Model (LLM)-based specialist modules that produce high-resolution, interpretable assessments from transcriptomic signals, phenotypes, variant databases, and the literature to prioritize potential diagnostic variants. In a rigorously curated benchmark of Undiagnosed Diseases Network (UDN) patients with paired genomic and transcriptomic data, RareCollab achieved 77% top-5 diagnostic accuracy and improved top-1 to top-5 accuracy by ~20% over widely used variant-prioritization approaches. RareCollab illustrates how modular artificial intelligence (AI) can operationalize multi-modal evidence for accurate, scalable rare disease diagnosis, offering a promising path toward reducing the diagnostic odyssey for affected families.
{"title":"RareCollab -- An Agentic System Diagnosing Mendelian Disorders with Integrated Phenotypic and Molecular Evidence.","authors":"Guantong Qi, Jiasheng Wang, Mei Ling Chong, Zahid Shaik, Shenglan Li, Shinya Yamamoto, Undiagnosed Diseases Network, Pengfei Liu, Hu Chen, Zhandong Liu","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Millions of children worldwide are affected by severe rare Mendelian disorders, yet exome and genome sequencing still fail to provide a definitive molecular diagnosis for a large fraction of patients, prolonging the diagnostic odyssey. Bridging this gap increasingly requires transitioning from DNA-only interpretation to multi-modal diagnostic reasoning that combines genomic data, transcriptomic sequencing (RNA-seq), and phenotype information; however, computational frameworks that coherently integrate these signals remain limited. Here we present RareCollab, an agentic diagnostic framework that pairs a stable quantitative Diagnostic Engine with Large Language Model (LLM)-based specialist modules that produce high-resolution, interpretable assessments from transcriptomic signals, phenotypes, variant databases, and the literature to prioritize potential diagnostic variants. In a rigorously curated benchmark of Undiagnosed Diseases Network (UDN) patients with paired genomic and transcriptomic data, RareCollab achieved 77% top-5 diagnostic accuracy and improved top-1 to top-5 accuracy by ~20% over widely used variant-prioritization approaches. 
RareCollab illustrates how modular artificial intelligence (AI) can operationalize multi-modal evidence for accurate, scalable rare disease diagnosis, offering a promising path toward reducing the diagnostic odyssey for affected families.</p>","PeriodicalId":93888,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12889852/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146168295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Many modern ultrasound beamformers report improved image quality when evaluated using classical criteria like the contrast ratio and contrast-to-noise ratio, which are based on summary statistics of regions of interest (ROIs). However, nonlinear beamformers and post-processing methods can substantially alter these statistics, raising concerns that the reported improvements may reflect changes in dynamic range or remapping rather than a reflection of true information gain, such as clutter suppression. New criteria like the generalized contrast-to-noise ratio (gCNR) address these concerns, but rely on noisy estimates of the underlying distribution. To address this, we introduce a new image quality criterion, called the contrast order (CO), defined as the expected value of the sign of the difference in brightness between two ROIs. The CO is invariant under all strictly monotonic transformations of the image values, as it depends only on their relative ordering, and is interpretable as the probability that one ROI is brighter than the other minus the probability that it is darker. Unlike the gCNR, the CO has a simple unbiased estimator whose variance decreases with the number of samples in each ROI. We further propose the effective contrast ratio (ECR), which calibrates the contrast order to the familiar contrast ratio such that the two coincide under ideal Rayleigh-speckle statistics. Together, the CO and ECR provide order- and sign-preserving, dynamic-range-invariant criteria for evaluating lesion contrast, offering a principled alternative to classical and newer image quality criteria when assessing modern beamformers.
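The CO's unbiased estimator is simple to state in code. The sketch below (function name and shapes are illustrative, not the paper's code) averages the sign of all pairwise brightness differences between two ROIs, which also makes the invariance under monotonic remapping easy to verify.

```python
import numpy as np

def contrast_order(roi_a, roi_b):
    """Unbiased estimate of the contrast order E[sign(a - b)] over all
    pixel pairs, i.e. P(a > b) - P(a < b). It depends only on the
    relative ordering of values, so it is invariant under any strictly
    monotonic remapping of the image (e.g. log compression).
    """
    a = np.asarray(roi_a, dtype=float).ravel()
    b = np.asarray(roi_b, dtype=float).ravel()
    # U-statistic form: mean of sign(a_i - b_j) over all (i, j) pairs.
    return float(np.mean(np.sign(a[:, None] - b[None, :])))
```

Because only the ordering matters, `contrast_order(np.log(a), np.log(b))` equals `contrast_order(a, b)` for positive-valued images, whereas a classical contrast ratio would change under the same remapping.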
{"title":"The Contrast Order: An Order-Based Image Quality Criterion for Nonlinear Beamformers.","authors":"Dongwoon Hyun","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Many modern ultrasound beamformers report improved image quality when evaluated using classical criteria like the contrast ratio and contrast-to-noise ratio, which are based on summary statistics of regions of interest (ROIs). However, nonlinear beamformers and post-processing methods can substantially alter these statistics, raising concerns that the reported improvements may reflect changes in dynamic range or remapping rather than a reflection of true information gain, such as clutter suppression. New criteria like the generalized contrast-to-noise ratio (gCNR) address these concerns, but rely on noisy estimates of the underlying distribution. To address this, we introduce a new image quality criterion, called the contrast order (CO), defined as the expected value of the sign of the difference in brightness between two ROIs. The CO is invariant under all strictly monotonic transformations of the image values, as it depends only on their relative ordering, and is interpretable as the probability that one ROI is brighter than the other minus the probability that it is darker. Unlike the gCNR, the CO has a simple unbiased estimator whose variance decreases with the number of samples in each ROI. We further propose the effective contrast ratio (ECR), which calibrates the contrast order to the familiar contrast ratio such that the two coincide under ideal Rayleigh-speckle statistics. 
Together, the CO and ECR provide order- and sign-preserving, dynamic-range-invariant criteria for evaluating lesion contrast, offering a principled alternative to classical and newer image quality criteria when assessing modern beamformers.</p>","PeriodicalId":93888,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12889846/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146168306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jagan Mohan Reddy Dwarampudi, Jennifer L Purks, Joshua Wong, Renjie Hu, Tania Banerjee
We introduce a reproducible, bias-resistant machine learning framework that integrates domain-informed feature engineering, nested cross-validation, and calibrated decision-threshold optimization for small-sample neuroimaging data. Conventional cross-validation frameworks that reuse the same folds for both model selection and performance estimation yield optimistically biased results, limiting reproducibility and generalization. Demonstrated on a high-dimensional structural MRI dataset of deep brain stimulation cognitive outcomes, the framework achieved a nested-CV balanced accuracy of 0.660 ± 0.068 using a compact, interpretable feature subset selected via importance-guided ranking. By combining interpretability and unbiased evaluation, this work provides a generalizable computational blueprint for reliable machine learning in data-limited biomedical domains.
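The nested cross-validation pattern the framework relies on can be sketched with scikit-learn on synthetic stand-in data (the estimator, parameter grid, and fold counts below are illustrative assumptions, not the paper's configuration): the inner loop performs model selection while the outer loop yields the unbiased performance estimate.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

# Toy stand-in for high-dimensional, small-sample neuroimaging features.
X, y = make_classification(n_samples=80, n_features=30, random_state=0)

# Inner loop: hyperparameter selection only.
inner = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0]},
    cv=inner,
    scoring="balanced_accuracy",
)

# Outer loop: performance estimation on folds never seen during selection.
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(search, X, y, cv=outer, scoring="balanced_accuracy")
print(scores.mean())
```

Reusing the same folds for both steps (a single `GridSearchCV` whose best score is reported) is exactly the optimistic-bias pitfall the abstract describes.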
{"title":"A Reproducible Framework for Bias-Resistant Machine Learning on Small-Sample Neuroimaging Data.","authors":"Jagan Mohan Reddy Dwarampudi, Jennifer L Purks, Joshua Wong, Renjie Hu, Tania Banerjee","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>We introduce a reproducible, bias-resistant machine learning framework that integrates domain-informed feature engineering, nested cross-validation, and calibrated decision-threshold optimization for small-sample neuroimaging data. Conventional cross-validation frameworks that reuse the same folds for both model selection and performance estimation yield optimistically biased results, limiting reproducibility and generalization. Demonstrated on a high-dimensional structural MRI dataset of deep brain stimulation cognitive outcomes, the framework achieved a nested-CV balanced accuracy of 0.660,$pm$,0.068 using a compact, interpretable subset selected via importance-guided ranking. By combining interpretability and unbiased evaluation, this work provides a generalizable computational blueprint for reliable machine learning in data-limited biomedical domains.</p>","PeriodicalId":93888,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12889860/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146168357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
T Anderson Keller, Lyle Muller, Terrence J Sejnowski, Max Welling
Spatiotemporal flows of neural activity, such as traveling waves, have been observed throughout the brain since the earliest recordings; yet there is still little consensus on their functional role. Recent experiments and models have linked traveling waves to visual and physical motion, but these observations have been difficult to reconcile with standard accounts of topographically organized selectivity and feedforward receptive fields. Here, we introduce a theoretical framework that formalizes and generalizes the connection between 'motion' and flowing neural dynamics in the language of equivariant neural network theory. We consider 'motion' not only in physical or visual spaces, but also in more abstract representational spaces, and we argue that recurrent traveling-wave-like dynamics are not just useful but necessary for accurate and stable processing of any signal undergoing such motion. Formally, we show that for any non-trivial recurrent neural network to process a sequence undergoing a flow transformation (such as visual motion) in a structured equivariant manner, its hidden state dynamics must actively realize a homomorphic representation of the same flow through recurrent connectivity. In this "spatiotemporal perspective on dynamical computation", traveling waves and related flows are best understood as faithful dynamic representations of stimulus flows; and consequently the natural inclination of biological systems towards such dynamics may be viewed as an innate inductive bias towards efficiency and generalization in the spatiotemporally-structured dynamical world they inhabit.
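A toy numerical illustration of the central claim (ours, not the paper's formalism): if a stimulus translates cyclically, a recurrent network whose connectivity implements the same cyclic shift carries a traveling wave in its hidden state that exactly mirrors the stimulus flow.

```python
import numpy as np

n = 8
# Cyclic shift matrix: a homomorphic (matrix) representation of translation.
shift = np.roll(np.eye(n), 1, axis=0)

# Stimulus: a localized bump translating one position per time step.
x = np.zeros(n)
x[0] = 1.0
h = x.copy()  # hidden state initialized to the stimulus
for t in range(5):
    x = shift @ x  # the stimulus "flow" (e.g. visual motion)
    h = shift @ h  # recurrent dynamics: a traveling wave in the hidden state
    assert np.allclose(h, x)  # the hidden flow faithfully tracks the stimulus flow
print(np.argmax(h))
```

Any recurrent connectivity that is not a representation of the shift would break this equivariance: the hidden state would drift away from the translating stimulus instead of traveling with it.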
{"title":"A Spatiotemporal Perspective on Dynamical Computation in Neural Information Processing Systems.","authors":"T Anderson Keller, Lyle Muller, Terrence J Sejnowski, Max Welling","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Spatiotemporal flows of neural activity, such as traveling waves, have been observed throughout the brain since the earliest recordings; yet there is still little consensus on their functional role. Recent experiments and models have linked traveling waves to visual and physical motion, but these observations have been difficult to reconcile with standard accounts of topographically organized selectivity and feedforward receptive fields. Here, we introduce a theoretical framework that formalizes and generalizes the connection between 'motion' and flowing neural dynamics in the language of equivariant neural network theory. We consider 'motion' not only in physical or visual spaces, but also in more abstract representational spaces, and we argue that recurrent traveling-wave-like dynamics are not just useful but necessary for accurate and stable processing of any signal undergoing such motion. Formally, we show that for any non-trivial recurrent neural network to process a sequence undergoing a flow transformation (such as visual motion) in a structured equivariant manner, its hidden state dynamics must actively realize a homomorphic representation of the same flow through recurrent connectivity. 
In this \"spatiotemporal perspective on dynamical computation\", traveling waves and related flows are best understood as faithful dynamic representations of stimulus flows; and consequently the natural inclination of biological systems towards such dynamics may be viewed as an innate inductive bias towards efficiency and generalization in the spatiotemporally-structured dynamical world they inhabit.</p>","PeriodicalId":93888,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12889856/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146168274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jiayu Su, Jun Hou Fung, Haoyu Wang, Dian Yang, David A Knowles, Raul Rabadan
Detecting spatial patterns is fundamental to scientific discovery, yet current methods lack statistical consensus and face computational barriers when applied to large-scale spatial omics datasets. We unify major approaches through a single quadratic form and derive general consistency conditions. We reveal that several widely used methods, including Moran's I, are inconsistent, and propose scalable corrections. The resulting test enables robust pattern detection across millions of spatial locations and single-cell lineage-tracing datasets.
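The shared quadratic form is easy to see for Moran's I, which for centered values z and spatial weight matrix W is I = (n / S0) · (zᵀWz)/(zᵀz), with S0 the sum of all weights. A minimal sketch on a toy 1D chain (illustrative only; not the paper's corrected statistic):

```python
import numpy as np

def morans_i(x, W):
    """Moran's I written as a quadratic form in the centered values z."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    n, s0 = len(x), W.sum()
    return (n / s0) * (z @ W @ z) / (z @ z)

# Rook adjacency on a 1D chain of 5 locations.
W = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
print(morans_i(np.array([1.0, 2.0, 3.0, 4.0, 5.0]), W))  # → 0.5
```

A smoothly increasing signal on the chain yields a positive I, as expected; the paper's point is that consistency of such tests hinges on properties of this quadratic form that Moran's I itself does not satisfy.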
{"title":"On the consistent and scalable detection of spatial patterns.","authors":"Jiayu Su, Jun Hou Fung, Haoyu Wang, Dian Yang, David A Knowles, Raul Rabadan","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Detecting spatial patterns is fundamental to scientific discovery, yet current methods lack statistical consensus and face computational barriers when applied to large-scale spatial omics datasets. We unify major approaches through a single quadratic form and derive general consistency conditions. We reveal that several widely used methods, including Moran's I, are inconsistent, and propose scalable corrections. The resulting test enables robust pattern detection across millions of spatial locations and single-cell lineage-tracing datasets.</p>","PeriodicalId":93888,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12889859/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146168327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jagan Mohan Reddy Dwarampudi, Joshua Wong, Hien Van Nguyen, Tania Banerjee
We introduce the Multi-scale Adaptive Recurrent Biomedical Linear-time Encoder (MARBLE), the first purely Mamba-based multi-state multiple instance learning (MIL) framework for whole-slide image (WSI) analysis. MARBLE processes multiple magnification levels in parallel and integrates coarse-to-fine reasoning within a linear-time state-space model, efficiently capturing cross-scale dependencies with minimal parameter overhead. WSI analysis remains challenging due to gigapixel resolutions and hierarchical magnifications, while existing MIL methods typically operate at a single scale and transformer-based approaches suffer from quadratic attention costs. By coupling parallel multi-scale processing with linear-time sequence modeling, MARBLE provides a scalable and modular alternative to attention-based architectures. Experiments on five public datasets show improvements of up to 6.9% in AUC, 20.3% in accuracy, and 2.3% in C-index, establishing MARBLE as an efficient and generalizable framework for multi-scale WSI analysis.
{"title":"A Multi-scale Linear-time Encoder for Whole-Slide Image Analysis.","authors":"Jagan Mohan Reddy Dwarampudi, Joshua Wong, Hien Van Nguyen, Tania Banerjee","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>We introduce Multi-scale Adaptive Recurrent Biomedical Linear-time Encoder (MARBLE), the first textit{purely Mamba-based} multi-state multiple instance learning (MIL) framework for whole-slide image (WSI) analysis. MARBLE processes multiple magnification levels in parallel and integrates coarse-to-fine reasoning within a linear-time state-space model, efficiently capturing cross-scale dependencies with minimal parameter overhead. WSI analysis remains challenging due to gigapixel resolutions and hierarchical magnifications, while existing MIL methods typically operate at a single scale and transformer-based approaches suffer from quadratic attention costs. By coupling parallel multi-scale processing with linear-time sequence modeling, MARBLE provides a scalable and modular alternative to attention-based architectures. Experiments on five public datasets show improvements of up to textbf{6.9%} in AUC, textbf{20.3%} in accuracy, and textbf{2.3%} in C-index, establishing MARBLE as an efficient and generalizable framework for multi-scale WSI analysis.</p>","PeriodicalId":93888,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12889854/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146168341","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
High-resolution spatial transcriptomics platforms, such as Xenium, generate single-cell images that capture both molecular and spatial context, but their extremely high dimensionality poses major challenges for representation learning and clustering. In this study, we analyze data from the Xenium platform, which captures high-resolution images of tumor microarray (TMA) tissues and converts them into cell-by-gene matrices suitable for computational analysis. We benchmark and extend nonnegative matrix factorization (NMF) for spatial transcriptomics by introducing two spatially regularized variants. First, we propose Spatial NMF (SNMF), a lightweight baseline that enforces local spatial smoothness by diffusing each cell's NMF factor vector over its spatial neighborhood. Second, we introduce Hybrid Spatial NMF (hSNMF), which performs spatially regularized NMF followed by Leiden clustering on a hybrid adjacency that integrates spatial proximity (via a contact-radius graph) and transcriptomic similarity through a tunable mixing parameter alpha. Evaluated on a cholangiocarcinoma dataset, SNMF and hSNMF achieve markedly improved spatial compactness (CHAOS < 0.004, Moran's I > 0.96), greater cluster separability (Silhouette > 0.12, DBI < 1.8), and higher biological coherence (CMC and enrichment) compared to other spatial baselines. Availability and implementation: https://github.com/ishtyaqmahmud/hSNMF.
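The SNMF smoothing step described above can be sketched in a few lines: each cell's NMF factor vector is blended with the mean factor vector of its spatial neighbors. The radius, mixing weight, and function name below are illustrative assumptions, not the published implementation.

```python
import numpy as np

def diffuse_factors(H, coords, radius=30.0, alpha=0.5):
    """Spatially smooth NMF factors: blend each cell's factor vector H[i]
    with the mean factor vector of all cells within `radius` of it.
    (Each neighborhood includes the cell itself, since its distance is 0.)
    """
    H = np.asarray(H, dtype=float)
    coords = np.asarray(coords, dtype=float)
    H_s = H.copy()
    for i in range(len(H)):
        d = np.linalg.norm(coords - coords[i], axis=1)
        nbrs = d < radius
        H_s[i] = (1.0 - alpha) * H[i] + alpha * H[nbrs].mean(axis=0)
    return H_s
```

This local averaging is what drives the improved spatial compactness metrics (lower CHAOS, higher Moran's I): neighboring cells are pulled toward shared factor profiles while the nonnegative factorization itself is left untouched.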
{"title":"hSNMF: Hybrid Spatially Regularized NMF for Image-Derived Spatial Transcriptomics.","authors":"Md Ishtyaq Mahmud, Veena Kochat, Suresh Satpati, Jagan Mohan Reddy Dwarampudi, Humaira Anzum, Kunal Rai, Tania Banerjee","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>High-resolution spatial transcriptomics platforms, such as Xenium, generate single-cell images that capture both molecular and spatial context, but their extremely high dimensionality poses major challenges for representation learning and clustering. In this study, we analyze data from the Xenium platform, which captures high-resolution images of tumor microarray (TMA) tissues and converts them into cell-by-gene matrices suitable for computational analysis. We benchmark and extend nonnegative matrix factorization (NMF) for spatial transcriptomics by introducing two spatially regularized variants. First, we propose Spatial NMF (SNMF), a lightweight baseline that enforces local spatial smoothness by diffusing each cell's NMF factor vector over its spatial neighborhood. Second, we introduce Hybrid Spatial NMF (hSNMF), which performs spatially regularized NMF followed by Leiden clustering on a hybrid adjacency that integrates spatial proximity (via a contact-radius graph) and transcriptomic similarity through a tunable mixing parameter alpha. Evaluated on a cholangiocarcinoma dataset, SNMF and hSNMF achieve markedly improved spatial compactness (CHAOS < 0.004, Moran's I > 0.96), greater cluster separability (Silhouette > 0.12, DBI < 1.8), and higher biological coherence (CMC and enrichment) compared to other spatial baselines. 
Availability and implementation: https://github.com/ishtyaqmahmud/hSNMF.</p>","PeriodicalId":93888,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12889855/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146168337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bhavyahshree Navaneetha Krishnan, Adel Heydarabadipour, Herbert Sauro
The BioModels database is one of the premier databases for computational models in systems biology. The database contains over 1000 curated models and an even larger number of non-curated models. All the models are stored in the machine-readable SBML format. Although SBML can be translated into the human-readable Antimony format, analyzing the models can still be time-consuming. To bridge this gap, an LLM (large language model) assistant was created to analyze the BioModels and allow interaction between the user and the model using natural language. By doing so, a user can easily and rapidly extract the salient points of a given model. Our analysis workflow involved 'chunking' BioModels and converting them to plain text using llama3, then embedding them in a ChromaDB database. The user-provided query was also embedded, and a similarity search was performed between the query and the BioModels in ChromaDB to extract the most relevant BioModels. The retrieved BioModels were then used as context so that the chat between the user and the LLM produced the most accurate output. This approach greatly minimized the chance of hallucination and kept the LLM focused on the problem at hand. We illustrate the utility of this approach with a number of examples. The code is available at https://github.com/TheBobBob/BioModelsRAG.
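The retrieval step of this workflow can be sketched with a toy embedding standing in for llama3/ChromaDB (all names, the hashing embedding, and the example chunks below are illustrative, not the BioModelsRAG code): chunks and query are embedded, and cosine similarity selects the most relevant chunks to pass to the LLM as context.

```python
import zlib
import numpy as np

def embed(text, dim=64):
    """Toy hashing bag-of-words embedding (a deterministic stand-in for
    the llama3 embeddings used in the real pipeline)."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[zlib.crc32(tok.encode()) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

def retrieve(query, chunks, k=2):
    """Return the k chunks with the highest cosine similarity to the query."""
    q = embed(query)
    scores = [float(q @ embed(c)) for c in chunks]
    order = np.argsort(scores)[::-1]
    return [chunks[i] for i in order[:k]]

chunks = [
    "glycolysis model with ten reactions",
    "circadian clock oscillator model",
    "MAPK signaling cascade model",
]
print(retrieve("circadian clock", chunks, k=1))
```

Grounding the LLM's answer in only these retrieved chunks, rather than its parametric memory, is what curbs hallucination in the full system.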
{"title":"BioModelsRAG: A Biological Modeling Assistant Using RAG (Retrieval Augmented Generation).","authors":"Bhavyahshree Navaneetha Krishnan, Adel Heydarabadipour, Herbert Sauro","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>The BioModels database is one of the premier databases for computational models in systems biology. The database contains over 1000 curated models and an even larger number of non-curated models. All the models are stored in the machine-readable format, SBML. Although SBML can be translated into the human readable Antimony format, analyzing the models can still be time consuming. In order to bridge this gap, a LLM (large language model) assistant was created to analyze the BioModels and allow interaction between the user and the model using natural language. By doing so, a user can easily and rapidly extract the salient points in a given model. Our analysis workflow involved 'chunking' BioModels and converting them to plain text using llama3, and then embedding them in a ChromaDB database. The user-provided query was also embedded, and a similarity search was performed between the query and the BioModels in ChromaDB to extract the most relevant BioModels. The BioModels were then used as context to create the most accurate output in the chat between the user and the LLM. This approach greatly minimized the chance of hallucination and kept the LLM focused on the problem at hand. We illustrate the utility of this approach with a number of examples. The code is available at https://github.com/TheBobBob/BioModelsRAG. 
The website implementation is available at https://biomodelsrag.streamlit.app/.</p>","PeriodicalId":93888,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12869393/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146127811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mattia Romeo, Cesare Gagliardo, Grazia Cottone, Giorgio Collura, Enrico Maggio, Claudio Runfola, Eleonora Bruno, Maria Cristina D'Oca, Massimo Midiri, Francesca Lizzi, Ian Postuma, Marco D'Amelio, Alessandro Lascialfari, Alessandra Retico, Maurizio Marrale
In recent years, in-vivo tractography has assumed an important role in the neurosciences, for both research and clinical applications such as non-invasive investigation of brain connectivity and presurgical planning in neurosurgery. More recently, there has been growing interest in applying diffusion tractography to target identification in functional neurological disorders for an increasingly tailored approach. The growing diffusion of well-established neurosurgical procedures, such as deep brain stimulation or trans-cranial Magnetic Resonance-guided Focused Ultrasound, has favored this trend. Tractography can indeed provide more accurate, patient-specific information about the targeted region compared to stereotactic atlases. On the other hand, this tractography-based approach is not very physician-friendly, and it is heavily time-consuming, since it requires several hours of Magnetic Resonance Imaging data processing. In this study we propose a novel open-source deep learning framework called DeLTA-BIT (an acronym of Deep-learning Local TrActography for BraIn Targeting) for fast target predictions based on probabilistic tractography. The proposed framework exploits a convolutional neural network (CNN) to predict the location of the Ventral Intermediate Nucleus of the thalamus (VIM). The CNN was trained on the Human Connectome Project (HCP) dataset. The model's capability in predicting the VIM location was tested on both HCP (internal validation) and clinical data (external validation). Results from the internal validation showed good capability in predicting the VIM region (mean DSC = 0.62 ± 0.15, mean sDSC = 0.76 ± 0.17) using only T1 images as input, on a time scale of a fraction of a second per subject. As for the clinical data, results were compared with an atlas-based method, demonstrating similar performance but within a significantly shorter timeframe.
{"title":"DeLTA-BIT: an open-source probabilistic tractography-based deep learning framework for thalamic targeting in functional neurological disorders.","authors":"Mattia Romeo, Cesare Gagliardo, Grazia Cottone, Giorgio Collura, Enrico Maggio, Claudio Runfola, Eleonora Bruno, Maria Cristina D'Oca, Massimo Midiri, Francesca Lizzi, Ian Postuma, Marco D'Amelio, Alessandro Lascialfari, Alessandra Retico, Maurizio Marrale","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>In the last years in-vivo tractography has assumed an important role in neurosciences, for both research and clinical applications such as non-invasive investigation of brain connectivity and presurgical planning in neurosurgery. In more recent years there has been a growing interest in the applications of diffusion tractography for target identification in functional neurological disorders for an increasingly tailored approach. The growing diffusion of well-established neurosurgical procedures, such as deep brain stimulation or trans-cranial Magnetic Resonance-guided Focused Ultrasound, favored this trend. Tractography can indeed provide more accurate, patient-specific, information about the targeted region if compared to stereotactic atlases. On the other hand, this tractography-based approach is not very physician-friendly, and its heavily time consuming since it needs several hours for Magnetic Resonance Imaging data processing. In this study we propose a novel open-source deep learning framework called DeLTA-BIT (acronym of Deep-learning Local TrActography for BraIn Targeting) for fast target predictions, based on probabilistic tractography. The proposed framework exploits a convolutional neural network (CNN) to predict the location of the Ventral Intermediate Nucleus of the thalamus (VIM). The CNN was trained on the Human Connectome Project (HCP) dataset. The model capability in predicting the VIM location was tested both on the HCP (internal validation) and clinical data (external validation). 
Results from the internal validation have shown good capability in predicting the VIM region (mean DSC = 0.62+- 0.15, mean sDSC=0.76+- 0.17) by using just T1 images as input, in a time scale of fraction of second per subject. As for the clinical data, results have been compared with an atlas-based method demonstrating similar performance, but within a significantly shorter timeframe.</p>","PeriodicalId":93888,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12869390/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146127872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Francesco Boccardo, Simone Di Marino, Agnese Seminara
We address the problem of how individuals can efficiently integrate their private behavior with information provided by others within a group. To this end, we consider the model of collective search introduced in [https://doi.org/10.1103/PhysRevE.102.012402], under a minimal setting with no olfactory information. Agents combine a private exploratory behavior with social imitation, which consists of aligning with their neighbors, and weigh the two contributions with a single "trust" parameter that controls their relative influence. We find that an optimal trust parameter exists even in the absence of olfactory information, as was observed in the original model. Optimality is dictated by the need to explore the minimal region of space that contains the target. An optimal trust parameter emerges from this constraint because it tunes imitation, which induces a collective mechanism of inertia affecting the size and path of the swarm. We predict the optimal trust parameter for cohesive groups where all agents interact with one another. We show how optimality depends on the initialization of the agents and the unknown location of the target, in close agreement with numerical simulations. Our results may be leveraged to optimize the design of swarm robotics or to understand information integration in organisms with decentralized nervous systems, such as cephalopods.
{"title":"Zero-information limit of a collective olfactory search model.","authors":"Francesco Boccardo, Simone Di Marino, Agnese Seminara","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>We address the problem of how individuals can integrate efficiently their private behavior with information provided by others within a group. To this end, we consider the model of collective search introduced in [https://doi.org/10.1103/PhysRevE.102.012402], under a minimal setting with no olfactory information. Agents combine a private exploratory behavior and a social imitation consisting in aligning to their neighbors, and weigh the two contributions with a single ``trust\" parameter that controls their relative influence. We find that an optimal trust parameter exists even in the absence of olfactory information, as was observed in the original model. Optimality is dictated by the need to explore the minimal region of space that contains the target. An optimal trust parameter emerges from this constraint because it it tunes imitation, which induces a collective mechanism of inertia affecting the size and path of the swarm. We predict the optimal trust parameter for cohesive groups where all agents interact with one another. We show how optimality depends on the initialization of the agents and the unknown location of the target, in close agreement with numerical simulations. 
Our results may be leveraged to optimize the design of swarm robotics or to understand information integration in organisms with decentralized nervous systems such as cephalopods.</p>","PeriodicalId":93888,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2026-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12869412/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146127822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}