Scroll-wave instabilities in excitable domains are central to life-threatening arrhythmias, yet practical methods to stabilize these dynamics remain limited. Here, we investigate the effects of boundary layer heterogeneities on the spatiotemporal dynamics of a quasi-2D semidiscrete excitable model. We reveal that a novel boundary-driven mechanism suppresses meandering and chaotic spiral dynamics. We show how the strength of the heterogeneities mediates the emergence of this regulation through a pinning-unpinning-like transition. We derive a reduced 2D model and find that a decrease in bulk excitability and boundary-driven delayed feedback underlie the stabilization. Our results may point to alternative methods to control arrhythmias.
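The excitable dynamics at issue can be illustrated with the classic FitzHugh-Nagumo model — a textbook stand-in, not the authors' semidiscrete model, with the standard parameter values: a suprathreshold kick produces one excursion, after which the cell relaxes back to its resting state.

```python
import numpy as np

# FitzHugh-Nagumo excitable cell (textbook parameters, not the paper's model):
#   v' = v - v^3/3 - w + I,   w' = eps * (v + a - b*w)
a, b, eps = 0.7, 0.8, 0.08

def integrate(v, w, T, dt=0.01, I=0.0):
    # Forward-Euler integration; adequate for this slow-fast system at small dt.
    for _ in range(int(T / dt)):
        dv = v - v**3 / 3 - w + I
        dw = eps * (v + a - b * w)
        v, w = v + dt * dv, w + dt * dw
    return v, w

# Kick the cell above threshold, then let it fire once and relax.
v, w = integrate(0.5, -0.6, T=300.0)
print(round(v, 2))  # ≈ -1.2, the resting state
```

With zero input current these parameters put the model in the excitable regime: the rest state near v ≈ -1.2 is globally attracting, which is why a single excursion (rather than sustained oscillation) results.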
Boundary-driven delayed-feedback control of spatiotemporal dynamics in excitable media. Sebastián Echeverría-Alar, Wouter-Jan Rappel. ArXiv, 2025-12-01. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12687857/pdf/
Po-Yi Lee, Chuan-Bor Chueh, Milen Shishkov, Tai-Ang Wang, Hsiang-Chieh Lee, Teresa Chen, Brett E Bouma, Martin Villiger
Polarization-sensitive optical coherence tomography (PS-OCT) extends OCT by analyzing the polarization states of backscattered light to quantify tissue birefringence. However, conventional implementations require polarization-diverse detection and are therefore incompatible with most commercial OCT systems. As a result, PS-OCT has largely remained restricted to specialized research groups, limiting its broader scientific and clinical use. Here, we present a modular PS-OCT framework that integrates with a standard spectral-domain OCT platform through a detachable rotating achromatic half-wave plate in the sample arm. This waveplate modulates both incident and reflected polarization states. Three or more repeated measurements at distinct waveplate orientations enable reconstruction of the sample's round-trip Jones matrix and the corresponding polarization properties. To mitigate random phase variations between repeated measurements, we introduce a retarder-constrained phase optimization strategy. We validate the framework with imaging of birefringent phantoms and the human retina in vivo, demonstrating reliable reconstruction of retardance and optic axis orientation. This approach requires only minimal hardware modification and is readily deployable on mainstream OCT systems. Lowering technical barriers paves the way for rapid and widespread deployment of PS-OCT across diverse biomedical applications in both research and clinical environments.
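As a small illustration of the polarization algebra involved — generic Jones calculus, not the paper's reconstruction pipeline — the retardance of a sample can be read off the eigenvalue phases of its Jones matrix:

```python
import numpy as np

def linear_retarder(delta, theta=0.0):
    # Jones matrix of a linear retarder: retardance delta, fast axis at angle theta.
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    D = np.diag([np.exp(-1j * delta / 2), np.exp(1j * delta / 2)])
    return R @ D @ R.T

def retardance(J):
    # For a unitary Jones matrix, retardance is the phase difference
    # between its two eigenvalues.
    ev = np.linalg.eigvals(J)
    return abs(np.angle(ev[0]) - np.angle(ev[1]))

J = linear_retarder(0.7, theta=0.3)
print(retardance(J))  # ≈ 0.7, independent of the axis angle
```

The rotation by theta is a similarity transform, so it leaves the eigenvalue phases — and hence the recovered retardance — unchanged; the round-trip Jones matrix estimated by the module can be analyzed the same way.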
Polarization-Sensitive Module for Optical Coherence Tomography Instruments. ArXiv, 2025-11-30. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12642765/pdf/
César Nieto, Sayeh Rezaee, Cesar Augusto Vargas-Garcia, Abhyudai Singh
Cells achieve size homeostasis by regulating their division timing based on their size, added size, and cell cycle time. Previous research under steady-state conditions demonstrated the robustness of these mechanisms. However, their dynamic responses in fluctuating environments, such as nutrient depletion due to population growth, remain challenging to fully characterize. Meanwhile, advances in single-cell microscopy have revealed various cellular division strategies whose underlying molecular mechanisms are complex and not always known. This study introduces a novel approach to model cell size dynamics using a piecewise deterministic Markov chain framework, where cell division events are modeled as stochastic jumps determined by a division propensity dependent on both current cell size and added size since birth. We propose a three-parameter characterization for the division process: scale (target added size at division), shape (division stochasticity), and division strategy (relevance of cell size, added size, or cell cycle duration). We derive analytical formulas for the probability of division, and with this probability, we develop a maximum likelihood estimation (MLE) framework. We implement a systematic investigation of the accuracy of inference as a function of sample size. The model's performance is studied across various scenarios, including those exhibiting dynamical changes in one or more parameters, suggesting its broad applicability for analyzing new experimental data on cell size regulation in dynamic environments.
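A toy simulation of one such strategy — the pure "adder", where each cycle adds a noisy size increment independent of birth size — shows the size homeostasis that the inference framework targets. All parameter values here are illustrative, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_adder(n_cycles=20000, target_added=1.0, shape=20.0, s0=3.0):
    # Adder strategy: each cycle the cell adds an increment drawn from
    # Gamma(shape, target_added/shape), so the mean added size is target_added,
    # then divides symmetrically. Birth size converges to target_added
    # regardless of the initial size s0.
    s = s0
    births = []
    for _ in range(n_cycles):
        added = rng.gamma(shape, target_added / shape)
        s = (s + added) / 2.0  # symmetric division
        births.append(s)
    return np.array(births)

births = simulate_adder()
print(births[-5000:].mean())  # ≈ 1.0, the homeostatic fixed point of the adder
```

The fixed point follows from s* = (s* + Δ)/2, i.e. s* = Δ: deviations from the target birth size are halved every generation, which is the hallmark of the adder correction.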
Dynamical Inference of Cell Size Regulation Parameters. ArXiv, 2025-11-27. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12676386/pdf/
Xilin Yang, Musa Aydin, Yuhong Lu, Sahan Yoruc Selcuk, Bijie Bai, Yijie Zhang, Andrew Birkeland, Katjana Ehrlich, Julien Bec, Laura Marcu, Nir Pillar, Aydogan Ozcan
Assessing resection margins is central to pathological specimen evaluation and has profound implications for patient outcomes. Current practice employs physical inking, which is applied variably, and cautery artifacts can obscure the true margin on histological sections. We present a virtual inking network (VIN) that autonomously localizes the surgical cut surface on whole-slide images, reducing reliance on inks and standardizing margin-focused review. VIN uses a frozen foundation model as the feature extractor and a compact two-layer multilayer perceptron trained for patch-level classification of cautery-consistent features. The dataset comprised 120 hematoxylin and eosin (H&E) stained slides from 12 human tonsil tissue blocks, resulting in ~2 TB of uncompressed raw image data; a board-certified pathologist provided boundary annotations. In blind testing with 20 slides from previously unseen blocks, VIN produced coherent margin overlays that qualitatively aligned with expert annotations across serial sections. Quantitatively, region-level accuracy was ~73.3% across the test set, with errors largely confined to limited areas that did not disrupt continuity of the whole-slide margin map. These results indicate that VIN captures cautery-related histomorphology and can provide a reproducible, ink-free margin delineation suitable for integration into routine digital pathology workflows and for downstream measurement of margin distances.
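The architecture described — frozen features feeding a compact two-layer MLP head — is simple enough to sketch. The feature dimension and class count below are assumptions for illustration; the actual foundation model and its feature size are not specified here:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_head(feats, W1, b1, W2, b2):
    # Compact two-layer MLP head on frozen foundation-model features:
    # hidden ReLU layer, then logits for {margin, non-margin} patch classes.
    h = np.maximum(feats @ W1 + b1, 0.0)
    return h @ W2 + b2

d_feat, d_hidden, n_cls = 768, 64, 2   # 768-d features: an assumption
W1 = rng.standard_normal((d_feat, d_hidden)) * 0.02
b1 = np.zeros(d_hidden)
W2 = rng.standard_normal((d_hidden, n_cls)) * 0.02
b2 = np.zeros(n_cls)

patch_feats = rng.standard_normal((5, d_feat))   # features for 5 patches
logits = mlp_head(patch_feats, W1, b1, W2, b2)
print(logits.shape)  # (5, 2): one margin/non-margin score pair per patch
```

Keeping the extractor frozen means only these small head weights are trained, which is what makes the approach light enough to retrain per task.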
Autonomous labeling of surgical resection margins using a foundation model. ArXiv, 2025-11-27. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12676371/pdf/
Shijie Yan, Douglas Dwyer, David R Kaeli, Qianqian Fang
Significance: Monte Carlo (MC) methods are the gold standard for modeling light-tissue interactions due to their accuracy. Mesh-based MC (MMC) offers enhanced precision for complex tissue structures using tetrahedral mesh models. Despite significant speedups achieved on graphics processing units (GPUs), MMC performance remains hindered by the computational cost of frequent ray-boundary intersection tests.
Aim: We propose a highly accelerated MMC algorithm, RT-MMC, that leverages the hardware-accelerated ray traversal and intersection capabilities of ray-tracing cores (RT-cores) on modern GPUs.
Approach: Implemented using NVIDIA's OptiX platform, RT-MMC extends graphics ray-tracing pipelines towards volumetric ray-tracing in turbid media, eliminating the need for challenging tetrahedral mesh generation while delivering significant speed improvements through hardware acceleration. It also intrinsically supports wide-field sources without complex mesh retesselation.
Results: RT-MMC demonstrates excellent agreement with traditional software-ray-tracing MMC algorithms while achieving 1.5× to 4.5× speedups across multiple GPU architectures. These performance gains significantly enhance the practicality of MMC for routine simulations.
Conclusion: Migration from software- to hardware-based ray-tracing not only greatly simplifies MMC simulation workflows, but also results in significant speedups that are expected to increase further as ray-tracing hardware rapidly gains adoption. Adoption of graphics ray-tracing pipelines in quantitative MMC simulations enables leveraging of emerging hardware resources and benefits a wide range of biophotonics applications.
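The ray-boundary test that RT-cores accelerate in hardware is, in software, typically the Möller-Trumbore ray-triangle intersection. A reference implementation of that test (illustrative, not the RT-MMC code):

```python
import numpy as np

def ray_triangle(orig, dirn, v0, v1, v2, eps=1e-9):
    # Möller-Trumbore ray-triangle intersection.
    # Returns the distance t along the ray, or None on a miss.
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(dirn, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return None                  # ray parallel to the triangle plane
    inv = 1.0 / det
    tvec = orig - v0
    u = np.dot(tvec, p) * inv        # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(tvec, e1)
    v = np.dot(dirn, q) * inv        # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv
    return t if t > eps else None    # only hits in front of the origin

orig_pt = np.array([0.0, 0.0, -1.0])
fwd = np.array([0.0, 0.0, 1.0])
tri = [np.array([-1.0, -1.0, 0.0]),
       np.array([1.0, -1.0, 0.0]),
       np.array([0.0, 1.0, 0.0])]
print(ray_triangle(orig_pt, fwd, *tri))   # → 1.0
print(ray_triangle(orig_pt, -fwd, *tri))  # → None (triangle is behind the ray)
```

A photon stepping through a tetrahedral mesh runs this test against the four faces of each tetrahedron it traverses, which is why offloading it to RT-cores pays off so directly.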
Accelerating mesh-based Monte Carlo simulations using contemporary graphics ray-tracing hardware. ArXiv, 2025-11-27. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12676385/pdf/
Kwok-Shing Chan, Hansol Lee, Yixin Ma, Berkin Bilgic, Susie Y Huang, Hong-Hsi Lee, José P Marques
Quantitative MRI (qMRI) offers tissue-specific biomarkers that can be tracked over time or compared across populations; however, its adoption in clinical research is hindered by the significant computational demands of parameter estimation. Images acquired at high spatial resolution and/or requiring fitting for multiple parameters often require lengthy processing time, constraining their use in routine pipelines and slowing methodological innovation and clinical translation. We present GACELLE, an open-source, GPU-accelerated framework for high-throughput qMRI analysis. GACELLE unifies a stochastic gradient descent optimiser (askadam.m) and a stochastic sampler (mcmc.m) under a common interface in MATLAB, enabling fast parameter mapping, improved estimation robustness via spatial regularisation, and uncertainty quantification. GACELLE prioritises accessibility and ease of integration: users only need to provide a forward signal model, while GACELLE's backend manages computational parallelisation, automatic parameter updates, and memory-efficient batching. The stochastic solver performs fully vectorised Markov chain Monte Carlo with identical likelihoods on CPU and GPU, ensuring reproducibility across hardware. Benchmarking demonstrates up to 451-fold acceleration for the stochastic gradient descent solver and 14,380-fold acceleration for stochastic sampling compared to CPU-based estimation, without compromising quantitative accuracy. We demonstrate GACELLE's versatility on three representative qMRI models and on an image reconstruction task. Across these applications, GACELLE improves parameter precision, enhances test-retest reproducibility, and reduces noise in quantitative maps. By combining speed, usability and flexibility, GACELLE provides a generalisable optimisation framework for medical image analysis.
It lowers the computational barrier for advanced qMRI, paving the way for reproducible biomarker development, large-scale imaging studies, and clinical translation.
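GACELLE's solvers are not reproduced here, but the underlying task — least-squares fitting of a user-supplied forward signal model, vectorised per voxel — can be sketched with plain gradient descent on a toy mono-exponential model. All names and values below are illustrative:

```python
import numpy as np

# Toy forward model y = a * exp(-b * x): a stand-in for a voxel-wise qMRI
# signal model. Plain gradient descent on the sum-of-squares loss.
x = np.linspace(0.0, 1.0, 20)
y = 1.0 * np.exp(-2.0 * x)           # noiseless "data" with a=1, b=2

a, b, lr = 0.5, 0.5, 0.01            # initial guess and step size
for _ in range(20000):
    m = a * np.exp(-b * x)
    r = m - y                        # residuals
    a -= lr * 2.0 * np.sum(r * np.exp(-b * x))          # dL/da
    b -= lr * 2.0 * np.sum(r * a * (-x) * np.exp(-b * x))  # dL/db
print(round(a, 3), round(b, 3))  # ≈ 1.0, 2.0
```

A GPU framework vectorises exactly this loop over millions of voxels at once, which is where the reported speedups come from.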
GACELLE: GPU-accelerated tools for model parameter estimation and image reconstruction. ArXiv, 2025-11-27. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12676372/pdf/
Non-invasive Brain-Computer Interfaces (BCIs) based on Code-Modulated Visual Evoked Potentials (C-VEPs) require highly robust decoding methods to address temporal variability and session-dependent noise in EEG signals. This study proposes and evaluates several deep learning architectures, including convolutional neural networks (CNNs) for 63-bit m-sequence reconstruction and classification, and Siamese networks for similarity-based decoding, alongside canonical correlation analysis (CCA) baselines. EEG data were recorded from 13 healthy adults under single-target flicker stimulation. The proposed deep models significantly outperformed traditional approaches, with distance-based decoding using Earth Mover's Distance (EMD) and constrained EMD showing greater robustness to latency variations than Euclidean and Mahalanobis metrics. Temporal data augmentation with small shifts further improved generalization across sessions. Among all models, the multi-class Siamese network achieved the best overall performance with an average accuracy of 96.89%, demonstrating the potential of data-driven deep architectures for reliable, single-trial C-VEP decoding in adaptive non-invasive BCI systems.
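The latency robustness of EMD over Euclidean distance is easy to see in one dimension, where EMD between equal-mass sequences reduces to the L1 distance between cumulative sums. This is a generic illustration, not the paper's constrained-EMD decoder:

```python
import numpy as np

def emd_1d(p, q):
    # 1-D Earth Mover's Distance between two nonnegative sequences of
    # equal mass: L1 distance between their cumulative distributions.
    p = p / p.sum()
    q = q / q.sum()
    return np.abs(np.cumsum(p) - np.cumsum(q)).sum()

t = np.arange(64)
pulse = np.exp(-0.5 * ((t - 20) / 2.0) ** 2)   # template evoked response
shifted = np.roll(pulse, 3)                    # same response, 3-sample latency

euc = np.linalg.norm(pulse - shifted)
print(euc, emd_1d(pulse, shifted))
# Euclidean distance jumps under a small latency shift,
# while EMD grows only linearly with the shift (≈ 3 samples here).
```

This is why distance-based decoders using EMD tolerate trial-to-trial latency jitter that breaks point-wise metrics such as Euclidean or Mahalanobis distance.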
Deep Learning Architectures for Code-Modulated Visual Evoked Potentials Detection. Kiran Nair, Hubert Cecotti. ArXiv, 2025-11-26. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12676383/pdf/
Lijie Huang, Jingyi Yin, Jingke Zhang, U-Wai Lok, Ryan M DeRuiter, Kaipeng Ji, Yanzhe Zhao, Tao Wu, James D Krier, Xiang-Yang Zhu, Andrew J Bentall, Andrew D Rule, Thomas D Atwell, Lilach O Lerman, Shigao Chen, Chengwu Huang
Objective: Hyper-clutter artifacts (HCA), arising from strong tissue reflections or physiological motion, present persistent challenges in ultrafast ultrasound Doppler imaging, often obscuring surrounding small vessel flow signals, especially in fascial regions such as the renal capsule. This study proposes U-profile-based decluttering (UPBD), a robust and computationally efficient method that exploits singular value decomposition (SVD)-derived spatial singular vectors to suppress HCA in ultrafast Doppler imaging.
Methods: UPBD analyzes the intensity profile of each pixel along the singular-order dimension of the SVD-derived left singular vectors U. A pixel-wise clutter-energy ratio is computed to derive a spatially adaptive declutter weighting map, which is applied to the SVD-filtered flow signals.
Results: UPBD was evaluated on multiple in vivo datasets. Quantitative assessments based on contrast-to-noise ratio (CNR) and contrast-to-tissue ratio (CTR) demonstrated significant improvements over conventional SVD filtering. For example, UPBD enhanced CTR from 7.3 dB to 21.7 dB in contrast-free pig kidney, 17.8 dB to 42.1 dB in contrast-enhanced pig kidney, 8.2 dB to 32.8 dB in human kidney, and -4.9 dB to 3.7 dB in 3D human liver.
Conclusion: The proposed UPBD method effectively suppresses HCA while preserving blood flow signals with minimal extra computational cost and no need for extensive parameter tuning.
Significance: UPBD serves as a lightweight, easily integrated post-processing method that enhances HCA suppression, enabling broader application of SVD-based ultrafast Doppler imaging.
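The baseline that UPBD builds on — SVD clutter filtering of the pixel-by-frame Casorati matrix — can be sketched on synthetic data. The U-profile weighting itself is not reproduced here; this only shows the conventional filter it refines:

```python
import numpy as np

rng = np.random.default_rng(0)

# Casorati matrix: rows = pixels, columns = frames. Tissue clutter is a
# high-energy, temporally coherent (here rank-1) component; blood flow is
# modeled as low-energy, temporally incoherent signal.
n_pix, n_frames = 200, 100
tissue = 50.0 * np.outer(rng.standard_normal(n_pix), np.ones(n_frames))
blood = rng.standard_normal((n_pix, n_frames))
casorati = tissue + blood

U, s, Vt = np.linalg.svd(casorati, full_matrices=False)
s_filt = s.copy()
s_filt[:1] = 0.0                     # zero the dominant (clutter) mode
flow = (U * s_filt) @ Vt             # reconstruct without the clutter

ratio = np.linalg.norm(flow) / np.linalg.norm(casorati)
print(ratio)  # small: nearly all clutter energy has been removed
```

Hyper-clutter artifacts arise when strong reflectors leak into the retained singular components; UPBD's pixel-wise clutter-energy ratio adds a spatial weighting on top of this global cutoff.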
Effective Hyper-clutter Artifacts Suppression for Ultrafast Ultrasound Doppler Imaging. ArXiv, 2025-11-26. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12676373/pdf/
Jingke Zhang, Jingyi Yin, U-Wai Lok, Lijie Huang, Ryan M DeRuiter, Tao Wu, Kaipeng Ji, Yanzhe Zhao, James D Krier, Xiang-Yang Zhu, Lilach O Lerman, Chengwu Huang, Shigao Chen
Three-dimensional ultrasound localization microscopy (ULM) enables comprehensive visualization of the vasculature, thereby improving diagnostic reliability. Nevertheless, its clinical translation remains challenging, as the exponential growth in voxel count for full 3D reconstruction imposes heavy computational demands and extensive post-processing time. In this row-column array (RCA)-based 3D in vivo pig kidney ULM study, we reformulate each step of the full 3D ULM pipeline, including beamforming, clutter filtering, motion estimation, microbubble separation and localization, into a series of computationally efficient 2D operations, substantially reducing the number of voxels to be processed while maintaining comparable accuracy. The proposed framework reconstructs each 0.75-s ensemble acquired at a frame rate of 400 Hz, covering a 25 × 27.4 × 27.4 mm³ volume, in 0.52 s (70% of the acquisition time) on a single RTX A6000 Ada GPU, while maintaining ULM image quality comparable to conventional 3D processing.
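The projection idea can be conveyed with a toy example (not the paper's pipeline): a bright point target can be localized from 2D projections of the volume instead of searching the full 3D grid, trading an O(nx·ny·nz) search for a few O(n²) ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy volume with background speckle and one bright point target
# (a stand-in for an isolated microbubble).
vol = rng.random((40, 50, 60)) * 0.1
true = (12, 34, 45)
vol[true] = 10.0

xy = vol.max(axis=2)           # maximum-intensity projection along z
z_line = vol.max(axis=(0, 1))  # projection onto the z axis
x, y = np.unravel_index(xy.argmax(), xy.shape)
z = z_line.argmax()
print((x, y, z))  # (12, 34, 45): the target recovered from 2D searches
```

Real microbubble fields need separation so that projections of different bubbles do not collide, which is why the paper reformulates the separation and localization steps jointly rather than projecting naively.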
{"title":"Fast 3D Ultrasound Localization Microscopy via Projection-based Processing Framework.","authors":"Jingke Zhang, Jingyi Yin, U-Wai Lok, Lijie Huang, Ryan M DeRuiter, Tao Wu, Kaipeng Ji, Yanzhe Zhao, James D Krier, Xiang-Yang Zhu, Lilach O Lerman, Chengwu Huang, Shigao Chen","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Three-dimensional ultrasound localization microscopy (ULM) enables comprehensive visualization of the vasculature, thereby improving diagnostic reliability. Nevertheless, its clinical translation remains challenging, as the exponential growth in voxel count for full 3D reconstruction imposes heavy computational demands and extensive post-processing time. In this row-column array (RCA)-based 3D in vivo pig kidney ULM study, we reformulate each step of the full 3D ULM pipeline (beamforming, clutter filtering, motion estimation, microbubble separation, and localization) into a series of computationally efficient 2D operations, substantially reducing the number of voxels to be processed while maintaining comparable accuracy. The proposed framework reconstructs each 0.75-s ensemble, acquired at a frame rate of 400 Hz and covering a 25 × 27.4 × 27.4 mm³ volume, in 0.52 s (70% of the acquisition time) on a single RTX A6000 Ada GPU, while maintaining ULM image quality comparable to conventional 3D processing. Quantitatively, it achieves a structural similarity index (SSIM) of 0.93 between density maps and voxel-wise velocity agreement with a slope of 0.93 and R² = 0.88, closely matching conventional 3D results. For the first time, this demonstrates the potential for real-time feedback during scanning, which could improve robustness, reduce operator dependence, and accelerate clinical workflows.</p>","PeriodicalId":93888,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12676384/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145703311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
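The voxel-wise velocity agreement reported in the abstract above (slope 0.93, R² = 0.88) is presumably an ordinary least-squares comparison of the fast pipeline's velocity estimates against the conventional 3D ones; a minimal sketch under that assumption, with hypothetical paired velocity samples standing in for the real voxel data, is:

```python
def velocity_agreement(v_fast, v_full):
    """Ordinary least-squares fit of fast-pipeline velocities (y) against
    conventional full-3D velocities (x); returns (slope, R^2)."""
    n = len(v_full)
    mx = sum(v_full) / n
    my = sum(v_fast) / n
    sxx = sum((x - mx) ** 2 for x in v_full)
    sxy = sum((x - mx) * (y - my) for x, y in zip(v_full, v_fast))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2
                 for x, y in zip(v_full, v_fast))
    ss_tot = sum((y - my) ** 2 for y in v_fast)
    r2 = 1.0 - ss_res / ss_tot
    return slope, r2

# Hypothetical paired velocity estimates (mm/s) at matched voxels.
v_full = [1.0, 2.0, 3.0, 4.0, 5.0]
v_fast = [1.1, 1.9, 3.2, 3.8, 5.1]
slope, r2 = velocity_agreement(v_fast, v_full)
```

A slope near 1 with high R² indicates the projection-based pipeline reproduces the conventional 3D velocity estimates voxel for voxel.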
Eric Leonardis, Akira Nagamori, Ayesha Thanawalla, Yuanjia Yang, Joshua Park, Hutton Saunders, Eiman Azim, Talmo D Pereira
The brain has evolved to effectively control the body, and to understand this relationship we need to model the sensorimotor transformations underlying embodied control. As part of a coordinated effort, we are developing a general-purpose platform for data-driven simulation that models high-fidelity behavioral dynamics, biomechanics, and the neural circuit architectures underlying embodied control. We present a pipeline that takes kinematics data from the neuroscience lab and recapitulates those natural movements in physics simulation. We implement an imitation learning framework to simulate a dexterous forelimb reaching task with a musculoskeletal model in the MuJoCo physics environment. The imitation learning model currently trains at more than 1 million training steps per second thanks to GPU acceleration with JAX and MuJoCo-MJX. We present results indicating that adding naturalistic constraints on control magnitude leads to simulated muscle activity that better predicts real EMG signals. This work provides evidence that control constraints are critical to modeling biological movement control.
{"title":"Massively Parallel Imitation Learning of Mouse Forelimb Musculoskeletal Reaching Dynamics.","authors":"Eric Leonardis, Akira Nagamori, Ayesha Thanawalla, Yuanjia Yang, Joshua Park, Hutton Saunders, Eiman Azim, Talmo D Pereira","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>The brain has evolved to effectively control the body, and to understand this relationship we need to model the sensorimotor transformations underlying embodied control. As part of a coordinated effort, we are developing a general-purpose platform for data-driven simulation that models high-fidelity behavioral dynamics, biomechanics, and the neural circuit architectures underlying embodied control. We present a pipeline that takes kinematics data from the neuroscience lab and recapitulates those natural movements in physics simulation. We implement an imitation learning framework to simulate a dexterous forelimb reaching task with a musculoskeletal model in the MuJoCo physics environment. The imitation learning model currently trains at more than 1 million training steps per second thanks to GPU acceleration with JAX and MuJoCo-MJX. We present results indicating that adding naturalistic constraints on control magnitude leads to simulated muscle activity that better predicts real EMG signals. This work provides evidence that control constraints are critical to modeling biological movement control.</p>","PeriodicalId":93888,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12676374/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145702341","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
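The "naturalistic constraints on control magnitude" described in the abstract above amount to penalizing large muscle activations during imitation. A toy sketch of such an objective, combining a kinematic tracking term with an activation-magnitude penalty; the function name, weights, and the exponential tracking term are illustrative assumptions, not the authors' actual implementation:

```python
import math

def imitation_reward(sim_pose, ref_pose, activations, w_track=5.0, w_ctrl=0.1):
    """Reward = exponential tracking term minus a control-magnitude penalty.

    sim_pose / ref_pose: joint angles from the simulated musculoskeletal
    model and from the recorded reference kinematics.
    activations: muscle activation signals; penalizing their squared norm
    is one simple form of the naturalistic control constraint described.
    """
    track_err = sum((s - r) ** 2 for s, r in zip(sim_pose, ref_pose))
    ctrl_cost = sum(a * a for a in activations)
    return math.exp(-w_track * track_err) - w_ctrl * ctrl_cost

# Perfect tracking with zero activation yields the maximum reward of 1.0;
# the same tracking achieved with larger activations scores strictly lower,
# steering the policy toward low-effort, EMG-like activation patterns.
r_best = imitation_reward([0.1, 0.2], [0.1, 0.2], [0.0, 0.0])   # 1.0
r_costly = imitation_reward([0.1, 0.2], [0.1, 0.2], [0.5, 0.5])
```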