Pub Date: 2026-02-05 | DOI: 10.1088/2057-1976/ae3e95
Mohammed Razzaq Mohammed
Polycaprolactone (PCL), chitosan (CS), and hydroxyapatite (HA) have emerged as complementary biomaterials for the design of advanced scaffolds in tissue engineering (TE). Individually, PCL offers excellent mechanical strength and formability but suffers from hydrophobicity and slow degradation. CS provides biocompatibility, antibacterial properties, and favorable cell-material interactions, yet its insufficient mechanical stability limits standalone use. HA, a bioactive ceramic, enhances osteoconductivity but is brittle in pure form. Recent advances focus on integrating these three components into hybrid composites that combine their desirable characteristics. Novel fabrication approaches, including electrospinning and 3D printing, have been optimized to tailor scaffold architecture, porosity, and mechanical integrity. Studies report enhanced cellular adhesion and differentiation, as well as improved angiogenic and antibacterial performance, when scaffolds are functionalized with bioactive agents or nanoparticles. For instance, incorporating nano-HA into PCL/CS scaffolds markedly boosted the proliferation of human skin fibroblasts (HSF 1184), yielding a 23% increase over PCL/CS scaffolds by day 3. In addition, HA-PCL/CS nanofibrous composite scaffolds showed a marked improvement in mechanical stiffness, with a greater than 15% increase in elastic modulus compared with the PCL/CS scaffold. Despite these advances, challenges remain in achieving controlled degradation, uniform dispersion of components, and scalable, reproducible fabrication for clinical translation. This review fills a critical gap by providing the first comprehensive analysis of advancements in PCL-CS-HA ternary TE systems, an area that remains unexplored despite existing reviews on the individual materials and their binary combinations.
It analyzes the latest developments in PCL-CS-HA composites, highlighting their structure, characteristics, processing strategies, biological outcomes, and future directions.
Emerging innovations in polycaprolactone-chitosan-hydroxyapatite composite scaffolds for tissue engineering: a review. Biomedical Physics & Engineering Express 12(1).
Pub Date: 2026-02-04 | DOI: 10.1088/2057-1976/ae3b47
Yingzhu Wang, Liang Zhang, Yuping Yan
Low-dose computed tomography (LDCT) reduces radiation risk but introduces high noise levels that compromise diagnostic quality. To address this, we propose a Hybrid Generalized Efficient Layer Aggregation Network-UNet (GELAN-UNet) model, which incorporates medical priors into a progressive modular architecture. This design uses medically enhanced modules in shallower layers to capture fine details and computationally efficient blocks in deeper layers to reduce cost. Key innovations include a novel low-frequency retention path and an edge-aware attention mechanism, both crucial for preserving critical diagnostic structures. Evaluated on the public Mayo Clinic dataset, the proposed method achieves a superior peak signal-to-noise ratio (PSNR) of 45.28 dB - a 12.45% improvement over the original LDCT - while maintaining an optimal balance between denoising performance and computational efficiency. The critical importance of the low-frequency path, revealed by ablation studies, validates the rationale of the hybrid strategy, which is further supported by comparisons with full medical and frequency-aware variants. This work delivers a high-performance denoising model alongside a practical, efficient architectural paradigm - rigorously validated through systematic exploration - for integrating domain-specific medical knowledge into deep learning frameworks.
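For reference, the reported image-quality metric is computed from the mean squared error against the full-dose image. A minimal pure-Python sketch (the `psnr` helper and the toy images are illustrative, not the authors' evaluation code):

```python
import math

def psnr(reference, test, data_range=1.0):
    """Peak signal-to-noise ratio between two equally sized images.

    PSNR = 10 * log10(MAX^2 / MSE); higher means the denoised image is
    closer to the full-dose reference. Images are flat lists of floats.
    """
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(data_range ** 2 / mse)

# toy check: a noisier image yields a lower PSNR
ref = [0.2, 0.5, 0.8, 0.4]
mild = [0.21, 0.5, 0.79, 0.4]
heavy = [0.3, 0.6, 0.7, 0.3]
```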
Hybrid GELAN-UNet: integrating medical priors for low-dose CT denoising. Biomedical Physics & Engineering Express.
Pub Date: 2026-02-04 | DOI: 10.1088/2057-1976/ae3b48
Shalaine S Tatu-Qassim, John Paul C Cabahug, Jose Bernardo L Padaca, Laureen Ida M Ballesteros, Ulysses B Ante, Earl John T Geraldo, Vladimir M Sarmiento, Carlos Emmanuel P Garcia, Eugene P Guevara, Jan Risty L Marzon, Mark Christian E Manuel, Chitho P Feliciano
Purpose. This study presents a novel method for fabricating a heterogeneous, tissue-equivalent mouse phantom using additive manufacturing, together with dosimetric verification for pre-clinical radiation research. Methods. Local Artificial Body for Radiation Analysis and Testing (LABRAT®) mouse phantoms were developed based on the Digimouse model. After 3D rendering, a mold-and-assemble additive manufacturing method was used, with a 1:1.3 polyurethane-resin mixture for lung tissue, a 1:1 resin-hardener mixture for soft tissue, and resin with 30% hydroxyapatite for bone. Three phantom types were developed: LABRAT A (full mouse), LABRAT B (with an ionization chamber provision), and LABRAT C (with axial slices through the head, upper lung, lower lung, abdomen, and spine for film dosimetry). Ionization chamber measurements were performed on LABRAT B under total-body irradiation (TBI) (0.5-2.0 Gy) using 130 kVp, 5.0 mA x-rays at a 23 cm source-to-phantom distance on top of a 5 cm PMMA slab. Film calibration and a 2.5 Gy TBI were also conducted on LABRAT C to obtain axial dose maps. Computed tomography (CT) images were acquired, and CT numbers of the phantoms were extracted using Slicer 5.4.0. Results. The fabrication method produced identical LABRAT® phantoms suitable for pre-clinical dosimetry. In the open-field plan, the measured dose for the LABRAT B phantom inside the acrylic mouse restrainer agreed within ±2.6% of the prescribed dose. Film images revealed the dose maps in each axial slice, showing gradients corresponding to doses of 0 to 3 Gy. Mean CT numbers were -621 ± 119 HU (lung), 70 ± 40 HU (soft tissue), and 430 ± 138 HU (bone). Conclusion. A heterogeneous mouse phantom was successfully developed and validated for dose verification in pre-clinical irradiation.
LABRAT® materials demonstrated appropriate anatomical and radiological equivalence, with accurate dosimetric performance and good geometric agreement with the Digimouse model.
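The CT-number extraction step amounts to computing mean ± SD Hounsfield units within each material's mask. A minimal sketch (the `hu_stats` helper and the toy voxel values are hypothetical, not the study's Slicer workflow):

```python
import math

def hu_stats(ct_values, mask):
    """Mean and standard deviation of CT numbers (HU) inside a binary mask.

    ct_values and mask are parallel flat lists; mask entries are 0/1.
    """
    inside = [v for v, m in zip(ct_values, mask) if m]
    mean = sum(inside) / len(inside)
    var = sum((v - mean) ** 2 for v in inside) / len(inside)
    return mean, math.sqrt(var)

# toy slice: lung-like voxels (around -600 HU) among soft tissue (~70 HU)
ct = [-610, -640, 75, 68, -590, 72]
lung_mask = [1, 1, 0, 0, 1, 0]
mean, sd = hu_stats(ct, lung_mask)
```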
Local artificial body for radiation analysis and testing (LABRAT®): additive manufacturing and dosimetric measurements of a heterogeneous mouse model phantom for pre-clinical radiation research. Biomedical Physics & Engineering Express.
Idiopathic pulmonary fibrosis significantly threatens patient survival and remains a condition with limited effective treatment options. There is an urgent need to expedite the exploration of idiopathic pulmonary fibrosis mechanisms and to identify suitable therapeutic approaches. Non-invasive and rapid segmentation of lung tissue, coupled with fibrosis quantification, is essential for drug development and efficacy monitoring. In this study, 59 mice were divided into training, validation, and test sets in a 70%:15%:15% ratio. Based on this split, we performed six-fold cross-validation to ensure the reliability of our results and calculated the average performance across all test sets. First, a 2.5D UNet was used to segment mouse lung tissue, followed by calculation of a fibrosis score from the segmented output, which can be used to evaluate the degree of pulmonary fibrosis. Dice score, precision, and recall were used to evaluate the performance of the 2.5D UNet. In the test set, the 2.5D UNet achieved an average Dice score of 0.938, precision of 0.941, and recall of 0.936 across the six-fold cross-validation. The fibrosis score effectively demonstrated the varying impacts of different modeling or treatment methods. The 2.5D UNet can effectively segment mouse lung tissue and evaluate fibrosis scores, laying a solid foundation for further research.
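The three reported metrics are standard overlap measures on binary masks. A minimal sketch, assuming segmentations are represented as sets of voxel indices (the helper and toy masks are illustrative):

```python
def overlap_metrics(pred, truth):
    """Dice score, precision and recall for binary segmentations.

    pred and truth are sets of voxel indices labelled as lung tissue.
    """
    tp = len(pred & truth)                       # true-positive voxels
    dice = 2 * tp / (len(pred) + len(truth))
    precision = tp / len(pred)
    recall = tp / len(truth)
    return dice, precision, recall

# toy 2D masks: prediction overlaps the ground truth in 3 of 4 voxels each
pred = {(0, 1), (0, 2), (1, 1), (1, 2)}
truth = {(0, 2), (1, 1), (1, 2), (2, 2)}
dice, prec, rec = overlap_metrics(pred, truth)
```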
Segmentation and calculation of lung fibrosis in IPF mice by 2.5D UNet. Yuemei Zheng, Tingting Weng, Yueyue Chang, Sijing Ma, Jian Zhang, Li Guo. DOI: 10.1088/2057-1976/ae38e5. Biomedical Physics & Engineering Express, 2026-02-04.
Pub Date: 2026-02-04 | DOI: 10.1088/2057-1976/ae3d3e
Yanniklas Kravutske, Mateus A Esmeraldo, Eduardo P Reis, Stefanie Chambers, Lukas Haider, Gregor Kasprian, Bruno P Soares
Introduction. Focal cortical dysplasia type II (FCD II) is a significant cause of drug-resistant epilepsy, and full surgical resection of the lesion is linked with excellent disease-free outcomes. Its imaging hallmark is the white matter hyperintense funnel-shaped transmantle sign on T2-FLAIR magnetic resonance imaging (MRI). Manual delineation of this abnormality is challenging and inconsistent. Most current artificial intelligence (AI) segmentation tools focus on cortical features and do not fully evaluate the white matter component. We tested whether integrating an algorithm trained on white matter lesions may improve FCD II segmentation. Methods. We evaluated the combination of two AI algorithms, MELD Graph (surface-based FCD segmentation) and MindGlide (whole-brain/white-matter lesion segmentation tool), in 49 FCD cases with a radiologically confirmed transmantle sign. Segmentation accuracy was assessed against expert manual annotations using the Dice similarity coefficient and segmentation volumes. Results. MELD Graph detected the lesion in 31 cases, 22 of which had the transmantle sign included in the expert lesion mask. Among these, MindGlide detected the transmantle sign in eight cases (36%). The mean added Dice score was 0.033 (95% CI, 0.013-0.056). Overall Dice values of MELD Graph were 0.321 and increased to 0.354 with the addition of MindGlide. It also contributed additional lesion volume in these eight cases, ranging from 0.028 to 4.18 cm³, with a mean added volume of 0.77 cm³. Discussion. Despite not being trained on FCD data, MindGlide, when combined with MELD Graph, provided a modest improvement in FCD II segmentation, including the deep white matter component of the lesion that is not captured by MELD Graph. Conclusion. These findings provide preliminary evidence supporting the consideration of a sequential cortical and white matter segmentation approach in FCD II, which may guide further epilepsy-specific AI model development.
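The "added Dice" contributed by the second tool can be understood as the Dice gained when the two predictions are merged by voxelwise union before comparison with the expert mask. A minimal sketch under that assumption (the union rule and the toy masks are illustrative, not the authors' exact pipeline):

```python
def dice(a, b):
    """Dice similarity coefficient between two voxel sets."""
    return 2 * len(a & b) / (len(a) + len(b)) if (a or b) else 1.0

def added_dice(expert, cortical, white_matter):
    """Dice gained by merging a white-matter mask into a cortical mask."""
    return dice(cortical | white_matter, expert) - dice(cortical, expert)

# toy 1D lesion: the surface-based tool finds part of the lesion,
# the second tool adds the deep (transmantle) component
expert = set(range(10))
cortical = set(range(4))
white_matter = {7, 8, 9}
gain = added_dice(expert, cortical, white_matter)
```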
Comprehensive segmentation of focal cortical dysplasia by combining surface-based and whole-brain MRI deep learning algorithms: a proof-of-concept study. Biomedical Physics & Engineering Express.
Pub Date: 2026-02-04 | DOI: 10.1088/2057-1976/ae41c5
Jianfang Li, Fazhi Qi, Yakang Li, Juan Chen, Yijie Pu, Shengxiang Wang
Low-dose computed tomography (LDCT) is crucial for reducing radiation exposure in medical imaging, but it often yields noisy images with artifacts that compromise diagnostic accuracy. Recently, Transformer-based models have shown great potential for LDCT denoising by modeling long-range dependencies and global context. However, standard Transformers incur prohibitive computational costs when applied to high-resolution medical images. To address this challenge, we propose a novel pure Transformer architecture for LDCT image restoration, designed within a hierarchical U-Net framework. The core of our innovation is the integration of an agent attention mechanism into a variable shifted-window design. This agent attention module efficiently approximates global self-attention by using a small set of agent tokens to aggregate and broadcast global contextual information, thereby achieving a global receptive field with only linear computational complexity. By embedding this mechanism within a multi-scale U-Net structure, our model effectively captures both fine-grained local details and long-range structural dependencies without sacrificing computational efficiency. Comprehensive experiments on a public LDCT dataset demonstrate that our method achieves state-of-the-art performance, outperforming existing approaches in both quantitative metrics and qualitative visual comparisons.
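The linear-complexity idea can be sketched as two narrow attentions through m ≪ N agent tokens: the agents first gather global context from all N tokens, then every token reads it back. A pure-Python toy (agent selection by strided subsampling of the queries is an assumption for illustration; the paper's module is more elaborate):

```python
import math

def matmul(A, B):
    """Plain list-of-lists matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def softmax_rows(M):
    """Row-wise softmax with the usual max-subtraction for stability."""
    out = []
    for row in M:
        m = max(row)
        e = [math.exp(x - m) for x in row]
        s = sum(e)
        out.append([x / s for x in e])
    return out

def transpose(M):
    return [list(c) for c in zip(*M)]

def agent_attention(Q, K, V, n_agents=2):
    """Linear-complexity approximation of global self-attention.

    A small set of agent tokens aggregates context from all N tokens, then
    broadcasts it back: two O(N * n_agents) attentions replace one O(N^2)
    attention, while every token still sees a global receptive field.
    """
    N, d = len(Q), len(Q[0])
    scale = 1.0 / math.sqrt(d)
    A = Q[:: max(1, N // n_agents)][:n_agents]            # agent tokens (m, d)
    # agents gather from all tokens: (m, d)
    agg = matmul(softmax_rows([[scale * x for x in r] for r in matmul(A, transpose(K))]), V)
    # tokens read the agents back: (N, d)
    return matmul(softmax_rows([[scale * x for x in r] for r in matmul(Q, transpose(A))]), agg)

Q = [[0.1, 0.2], [0.3, 0.1], [0.2, 0.4], [0.5, 0.3]]
K = [[0.2, 0.1], [0.1, 0.3], [0.4, 0.2], [0.3, 0.5]]
V = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]]
out = agent_attention(Q, K, V, n_agents=2)
```

Because the output is a convex combination of convex combinations of V's rows, every entry stays within the range of V's values.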
Unet-like Transformer with variable shifted windows for low dose CT denoising. Biomedical Physics & Engineering Express.
Pub Date: 2026-02-04 | DOI: 10.1088/2057-1976/ae3966
Jiayu Lin, Liwen Zou, Yiming Gao, Liang Mao, Ziwei Nie
Accurate and automatic registration of the pancreas between contrast-enhanced CT (CECT) and non-contrast CT (NCCT) images is crucial for diagnosing and treating pancreatic cancer. However, existing deep learning-based methods remain limited due to inherent intensity differences between modalities, which impair intensity-based similarity metrics, and the pancreas's small size, vague boundaries, and complex surroundings, which trap segmentation-based metrics in local optima. To address these challenges, we propose a weakly supervised registration framework incorporating a novel mixed loss function. This loss leverages the Wasserstein distance to enforce anatomical topology consistency in 3D pancreas registration between CECT and NCCT. We employ distance transforms to build the small, uncertain, and complex anatomical topology distribution of the pancreas. Unlike a conventional voxel-wise L1 or L2 loss, the Wasserstein distance directly measures the similarity between the warped and fixed anatomical topologies of the pancreas. Experiments on a dataset of 975 paired CECT-NCCT images from patients with seven pancreatic tumor types (PDAC, IPMN, MCN, SCN, SPT, CP, PNET) demonstrate that our method outperforms state-of-the-art weakly supervised approaches, achieving a 3.2% improvement in Dice score, a 28.54% reduction in false-positive segmentation rate, and a 0.89% reduction in Hausdorff distance. The source code will be made publicly available at https://github.com/ZouLiwen-1999/WSMorph.
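For equal-size 1D samples, the first-order Wasserstein distance reduces to the mean absolute difference of the sorted values, which is what makes it insensitive to voxel ordering, unlike an element-wise L1 loss. A minimal sketch (the toy distance-transform values are hypothetical, not from the paper):

```python
def wasserstein_1d(u, v):
    """First-order Wasserstein distance between two equal-size 1D samples.

    For empirical distributions with the same number of points, W1 reduces
    to the mean absolute difference of the sorted samples.
    """
    assert len(u) == len(v)
    return sum(abs(a - b) for a, b in zip(sorted(u), sorted(v))) / len(u)

# toy "anatomical topology" descriptors: distance-transform values sampled
# inside a warped and a fixed pancreas mask (illustrative numbers)
warped = [0.0, 1.0, 1.0, 2.0, 3.0]
fixed = [0.0, 1.0, 2.0, 2.0, 3.0]
loss = wasserstein_1d(warped, fixed)
```

Note that permuting either sample leaves the loss unchanged, whereas a voxel-wise L1 loss would not.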
Learning the anatomical topology consistency driven by Wasserstein distance for weakly supervised 3D pancreas registration in multi-phase CT images. Biomedical Physics & Engineering Express.
Pub Date: 2026-02-04 | DOI: 10.1088/2057-1976/ae3b44
Jeremy S Bredfeldt, Arianna Liles, Yue-Houng Hu, Dianne Ferguson, Christian Guthier, David Hu, Scott Friesen, Kolade Agboola, John Whitaker, Hubert Cochet, Usha Tedrow, Ray Mak, Kelly Fitzgerald
Background and purpose. To determine the interobserver variability in registrations of cardiac computed tomography (CT) images and to assess the margins needed to account for the observed variability in the context of stereotactic arrhythmia radioablation (STAR). Materials and methods. STAR targets were delineated on cardiac CTs for fifteen consecutive patients. Ten expert observers were asked to rigidly register the cardiac CT images to the corresponding planning CT images. All registrations started with a fully automated registration step, followed by manual adjustments. The targets were transferred from cardiac to planning CT using each of the registrations, along with one consensus registration per patient. The margin needed for the consensus target to encompass each of the observer and fully automated targets was measured. Results. A total of 150 registrations were evaluated for this study. Manual registrations required an average (standard deviation) of 5 min, 55 s (2 min, 10 s) to perform. The automated registration, without manual intervention, required an expansion of 6 mm to achieve 95% overlap for 97% of patients. For the manual registrations, an expansion of 4 mm achieved 95% overlap for 97% of the patients and observers; the remaining 3% required expansions of 4 to 9 mm. An expansion of 3 mm achieved 95% overlap in 88% of the cases. Some patients required larger expansions than others, and small target volumes were common among these more difficult cases. Neither breath-hold nor target position was observed to affect variability among observers. Some observers required larger expansions than others, and those requiring the largest margins were not the same from patient to patient. Conclusion. Registration of the cardiac CT to the planning CT contributed approximately 3 mm of uncertainty to the STAR targeting process.
Accordingly, workflows in which target delineation is performed on cardiac CT should explicitly account for this uncertainty in the overall target margin assessment.
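The margin measurement described above can be sketched as a mask-expansion search: given a consensus target mask and an observer's transferred target mask, find the smallest isotropic expansion of the consensus that covers at least 95% of the observer target. The following is a minimal illustration on hypothetical toy masks, not the authors' data or software; the 6-connected dilation approximates one voxel of growth per step along each axis.

```python
import numpy as np
from scipy.ndimage import binary_dilation, generate_binary_structure

def min_expansion_mm(consensus, observer, voxel_mm=1.0, coverage=0.95, max_steps=15):
    """Smallest expansion (mm) of `consensus` covering >= `coverage` of `observer`.
    Each dilation step grows the mask by roughly one voxel along each axis."""
    struct = generate_binary_structure(3, 1)  # 6-connected neighborhood
    expanded = consensus.astype(bool)
    target = observer.astype(bool)
    for step in range(max_steps + 1):
        if (expanded & target).sum() / target.sum() >= coverage:
            return step * voxel_mm
        expanded = binary_dilation(expanded, structure=struct)
    return None  # coverage not reached within max_steps

# Toy case: observer target shifted 2 voxels from the consensus target.
consensus = np.zeros((20, 20, 20), dtype=bool)
consensus[5:10, 5:10, 5:10] = True
observer = np.zeros((20, 20, 20), dtype=bool)
observer[7:12, 5:10, 5:10] = True
print(min_expansion_mm(consensus, observer))  # 2-voxel shift -> 2.0 mm at 1 mm voxels
```

In the study the expansion was measured per observer and per patient; this sketch only shows the coverage criterion for a single pair of masks.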
Title: Interobserver image registration variability impacts on stereotactic arrhythmia radioablation (STAR) target margins. (Biomedical Physics & Engineering Express, published 2026-02-04, DOI: 10.1088/2057-1976/ae3b44)
Pub Date : 2026-02-03 DOI: 10.1088/2057-1976/ae4105
Joshua Dugdale, Garrett Scott Black, Jordan Alexander Borrell
Functional near-infrared spectroscopy (fNIRS) is a portable, non-invasive brain imaging method with growing applications in neurorehabilitation. However, signal variability, driven in part by differences in data processing pipelines, remains a major barrier to its clinical adoption. This study compares the robustness of two common processing approaches, General Linear Model (GLM) and Block Averaging (BA), in detecting cortical activation across task complexities. Eighteen neurotypical, healthy adults completed a simple hand grasp task and a more complex gross manual dexterity task while fNIRS data were recorded and analyzed using the BA and GLM pipelines. Results revealed significant effects of both pipeline and task complexity on oxygenated and deoxygenated hemoglobin amplitudes. BA produced significantly larger responses than GLM, and complex tasks elicited significantly greater activation than simple tasks. Notably, only the BA-Complex subgroup showed significant differences from all other conditions, suggesting BA more effectively detects task-related hemodynamic changes. These findings emphasize the need for careful analysis pipeline selection to reduce variability and enhance fNIRS reliability in neurorehabilitation research.
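The two pipelines compared above can be illustrated on synthetic data. In this sketch, every parameter (sampling rate, block timing, HRF shape, amplitudes, noise level) is an invented assumption rather than a value from the study: block averaging takes the peak of the baseline-corrected mean epoch, while the GLM fits a regressor built by convolving the task boxcar with a simplified canonical-style HRF.

```python
import numpy as np

fs = 10.0                              # assumed fNIRS sampling rate (Hz)
t = np.arange(0, 300, 1 / fs)          # one 5-minute run
rng = np.random.default_rng(0)

def hrf(tt):
    """Simplified double-gamma hemodynamic response function."""
    return tt**5 * np.exp(-tt) / 120 - 0.1 * tt**10 * np.exp(-tt) / 3628800

onsets = np.arange(20, 280, 40)        # seven 15-s task blocks, 40 s apart
boxcar = np.zeros_like(t)
for o in onsets:
    boxcar[(t >= o) & (t < o + 15)] = 1.0

design = np.convolve(boxcar, hrf(np.arange(0, 30, 1 / fs)))[:len(t)]
design /= design.max()
hbo = 1.5 * design + rng.normal(0, 0.3, len(t))   # synthetic HbO trace

# Block averaging: baseline-correct each epoch, average, take the peak.
epochs = np.stack([hbo[int((o - 5) * fs):int((o + 25) * fs)] for o in onsets])
epochs -= epochs[:, :int(5 * fs)].mean(axis=1, keepdims=True)
ba_amp = epochs.mean(axis=0).max()

# GLM: least-squares fit of the HRF-convolved regressor plus an intercept.
X = np.column_stack([design, np.ones_like(t)])
beta, *_ = np.linalg.lstsq(X, hbo, rcond=None)
glm_amp = beta[0]
```

On this toy trace the block-average peak tends to come out larger than the GLM beta, since max-picking also captures residual noise — a simplified analogue of the amplitude difference the study reports, not a reproduction of its analysis.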
Title: Investigating Functional Near-Infrared Spectroscopy Signal Variability: The Role of Processing Pipelines and Task Complexity. (Biomedical Physics & Engineering Express)
Pub Date : 2026-02-03 DOI: 10.1088/2057-1976/ae3b45
Jason Leung, Ledycnarf J Holanda, Laura Wheeler, Tom Chau
In-ear electroencephalography (EEG) systems offer several practical advantages over scalp-based EEG systems for non-invasive brain-computer interface (BCI) applications. However, the difficulty of fabricating in-ear EEG systems can limit their accessibility for BCI use cases. In this study, we developed a portable, low-cost wireless in-ear EEG device using commercially available components. In-ear EEG signals (referenced to the left mastoid) from 5 adolescent participants were compared with scalp EEG collected simultaneously during an alpha modulation task, various artifact induction tasks, and an auditory word-streaming BCI paradigm. Spectral analysis confirmed that the proposed in-ear EEG system could capture significantly increased alpha activity during eyes-closed relaxation in 3 of 5 participants, with a signal-to-noise ratio of 2.34 across all participants. In-ear EEG signals were most susceptible to horizontal head movement, coughing, and vocalization artifacts but were relatively insensitive to ocular artifacts such as blinking. For the auditory streaming paradigm, the classifier decoded the presented stimuli from in-ear EEG signals in only 1 of 5 participants. Classification of the attended stream did not exceed chance levels. Contrast plots showing the difference between attended and unattended streams revealed reduced amplitudes of in-ear EEG responses relative to scalp EEG responses. Hardware modifications are needed to amplify in-ear signals and measure electrode-skin impedances to improve the viability of in-ear EEG for BCI applications.
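The alpha-modulation check can be sketched as a band-power ratio: Welch power spectra for eyes-closed versus eyes-open recordings, with the 8-12 Hz ratio serving as an SNR-like measure. Everything below (sampling rate, signal amplitudes, the ratio definition itself) is an illustrative assumption on synthetic data, not the paper's exact computation.

```python
import numpy as np
from scipy.signal import welch

fs = 250                               # assumed EEG sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)           # one minute per condition
rng = np.random.default_rng(1)

def background():
    """Broadband noise with low-frequency drift, standing in for EEG background."""
    drift = np.cumsum(rng.normal(0, 1, len(t))) * 0.02
    return drift + rng.normal(0, 1, len(t))

eyes_open = background()
eyes_closed = background() + 2.0 * np.sin(2 * np.pi * 10 * t)  # 10 Hz alpha

def alpha_power(x):
    """Mean 8-12 Hz power spectral density via Welch's method (4-s segments)."""
    f, pxx = welch(x, fs=fs, nperseg=4 * fs)
    return pxx[(f >= 8) & (f <= 12)].mean()

snr = alpha_power(eyes_closed) / alpha_power(eyes_open)
print(snr > 1)  # alpha power rises during eyes-closed relaxation
```

A ratio near 1 would indicate no detectable alpha modulation; the study's reported value of 2.34 across participants sits well above that floor.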
Title: Wireless in-ear EEG system for auditory brain-computer interface applications in adolescents. (Biomedical Physics & Engineering Express, vol. 12, no. 1)