Pub Date: 2026-02-05 | DOI: 10.1088/2057-1976/ae3e95
Mohammed Razzaq Mohammed
Polycaprolactone (PCL), chitosan (CS), and hydroxyapatite (HA) have emerged as complementary biomaterials for the design of advanced scaffolds in tissue engineering (TE). Individually, PCL offers excellent mechanical strength and formability but suffers from hydrophobicity and slow degradation. CS provides biocompatibility, antibacterial properties, and favorable cell-material interactions, yet its insufficient mechanical stability limits standalone use. HA, a bioactive ceramic, enhances osteoconductivity but is brittle in pure form. Recent advances focus on integrating these three components into hybrid composites that combine their desirable characteristics. Fabrication approaches, including electrospinning and 3D printing, have been optimized to tailor scaffold architecture, porosity, and mechanical integrity. Studies report enhanced cellular adhesion and differentiation, as well as improved angiogenic and antibacterial performance, when scaffolds are functionalized with bioactive agents or nanoparticles. For instance, incorporating nano-HA into PCL/CS scaffolds markedly boosted the proliferation of human skin fibroblasts (HSF 1184), yielding a 23% increase over PCL/CS scaffolds by day 3. In addition, HA-PCL/CS nanofibrous composite scaffolds showed a greater than 15% increase in elastic modulus compared with the PCL/CS scaffold. Despite these advances, challenges remain in achieving controlled degradation, uniform dispersion of components, and scalable, reproducible fabrication for clinical translation. This review fills a critical gap by providing the first comprehensive analysis of advances in ternary PCL-CS-HA TE systems, an area that remains unexplored despite existing reviews on the individual materials and their binary combinations.
It analyzes the latest developments in PCL-CS-HA composites, highlighting their structure, characteristics, processing strategies, biological outcomes, and future directions.
Title: "Emerging innovations in polycaprolactone-chitosan-hydroxyapatite composite scaffolds for tissue engineering: a review" (Biomedical Physics & Engineering Express)
Pub Date: 2026-02-04 | DOI: 10.1088/2057-1976/ae3b47
Yingzhu Wang, Liang Zhang, Yuping Yan
Low-dose computed tomography (LDCT) reduces radiation risk but introduces high noise levels that compromise diagnostic quality. To address this, we propose a hybrid Generalized Efficient Layer Aggregation Network-UNet (GELAN-UNet) model, which incorporates medical priors into a progressive modular architecture. This design uses medically enhanced modules in shallower layers to capture fine detail and computationally efficient blocks in deeper layers to reduce cost. Key innovations include a novel low-frequency retention path and an edge-aware attention mechanism, both crucial for preserving critical diagnostic structures. Evaluated on the public Mayo Clinic dataset, the proposed method achieves a peak signal-to-noise ratio (PSNR) of 45.28 dB, a 12.45% improvement over the original LDCT, while maintaining a favorable balance between denoising performance and computational efficiency. Ablation studies reveal the critical importance of the low-frequency path, validating the hybrid strategy; this is further supported by comparisons with fully medical and frequency-aware variants. This work delivers a high-performance denoising model alongside a practical, efficient architectural paradigm, rigorously validated through systematic exploration, for integrating domain-specific medical knowledge into deep learning frameworks.
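For reference, the PSNR figures quoted above follow the standard definition, 10·log10(MAX²/MSE); a 12.45% improvement over the original LDCT implies a baseline PSNR of roughly 40.3 dB. A minimal sketch (the `psnr` helper and toy data are illustrative, not from the paper):

```python
import numpy as np

def psnr(reference: np.ndarray, image: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB: 10*log10(MAX^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64) - image.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)

# Toy example: a clean slice, a noisy copy, and a better "denoised" copy.
rng = np.random.default_rng(0)
clean = rng.random((64, 64))
noisy = clean + rng.normal(0, 0.1, clean.shape)     # sigma 0.1 -> about 20 dB
denoised = clean + rng.normal(0, 0.01, clean.shape)  # sigma 0.01 -> about 40 dB

assert psnr(clean, denoised) > psnr(clean, noisy)
```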
Title: "Hybrid GELAN-UNet: integrating medical priors for low-dose CT denoising"
Pub Date: 2026-02-04 | DOI: 10.1088/2057-1976/ae3b48
Shalaine S Tatu-Qassim, John Paul C Cabahug, Jose Bernardo L Padaca, Laureen Ida M Ballesteros, Ulysses B Ante, Earl John T Geraldo, Vladimir M Sarmiento, Carlos Emmanuel P Garcia, Eugene P Guevara, Jan Risty L Marzon, Mark Christian E Manuel, Chitho P Feliciano
Purpose. This study presents a novel method for fabricating a heterogeneous, tissue-equivalent mouse phantom using additive manufacturing, together with dosimetric verification for applications in pre-clinical radiation research. Methods. Local Artificial Body for Radiation Analysis and Testing (LABRAT®) mouse phantoms were developed based on the Digimouse model. After 3D rendering, a mold-and-assemble additive manufacturing method was used with a 1:1.3 polyurethane-resin material for lung tissue, a 1:1 resin-hardener mixture for soft tissue, and resin with 30% hydroxyapatite for bone. Three phantom types were developed: LABRAT A (full mouse), LABRAT B (with an ionization chamber provision), and LABRAT C (with axial slices through the head, upper lung, lower lung, abdomen, and spine for film dosimetry). Ionization chamber measurements were performed on LABRAT B under total-body irradiation (TBI) (0.5-2.0 Gy) using 130 kVp, 5.0 mA x-rays at a 23 cm source-to-phantom distance on top of a 5 cm PMMA slab. Film calibration and 2.5 Gy TBI were also conducted on LABRAT C to obtain axial dose maps. Computed tomography (CT) images were acquired, and CT numbers of the phantoms were extracted using Slicer 5.4.0. Results. The fabrication method produced identical LABRAT® phantoms suitable for pre-clinical dosimetry. In the open-field plan, the measured dose for the LABRAT B phantom inside the acrylic mouse restrainer agreed within ±2.6% of the prescribed dose. Film images revealed the corresponding dose maps in each axial slice, showing gradients corresponding to doses of 0 to 3 Gy. Mean CT numbers were -621 ± 119 HU (lung), 70 ± 40 HU (soft tissue), and 430 ± 138 HU (bone). Conclusion. A heterogeneous mouse phantom was successfully developed and validated for dose verification in pre-clinical irradiation.
LABRAT® materials demonstrated appropriate anatomical and radiological equivalence, with accurate dosimetric performance and good geometric agreement with the Digimouse model.
Title: "Local artificial body for radiation analysis and testing (LABRAT®): additive manufacturing and dosimetric measurements of a heterogeneous mouse model phantom for pre-clinical radiation research"
Idiopathic pulmonary fibrosis (IPF) significantly threatens patient survival and remains a condition with limited effective treatment options. There is an urgent need to expedite the exploration of IPF mechanisms and identify suitable therapeutic approaches. Non-invasive and rapid segmentation of lung tissue, coupled with fibrosis quantification, is essential for drug development and efficacy monitoring. In this study, 59 mice were divided into training, validation, and test sets at a ratio of 70%:15%:15%. Based on this ratio, we performed six-fold cross-validation to ensure the reliability of our results and calculated the average performance across all test sets. First, a 2.5D UNet was used to segment mouse lung tissue, followed by calculation of a fibrosis score from the segmented output, which can be used to evaluate the degree of pulmonary fibrosis. Dice score, precision, and recall were used to evaluate the performance of the 2.5D UNet. On the test set, the 2.5D UNet achieved an average Dice score of 0.938, precision of 0.941, and recall of 0.936 across the six-fold cross-validation. The fibrosis score effectively demonstrated the varying impacts of different modeling or treatment methods. The 2.5D UNet can effectively segment mouse lung tissue and evaluate fibrosis scores, laying a solid foundation for further research.
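The reported Dice, precision, and recall follow their standard definitions for binary segmentation masks; a minimal sketch (function name and toy masks are illustrative, not the authors' code):

```python
import numpy as np

def seg_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Dice, precision, and recall for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()   # true positives
    fp = np.logical_and(pred, ~truth).sum()  # false positives
    fn = np.logical_and(~pred, truth).sum()  # false negatives
    return {
        "dice": 2 * tp / (2 * tp + fp + fn),
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),
    }

pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
m = seg_metrics(pred, truth)  # tp=2, fp=1, fn=1 -> all three metrics 2/3
```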
Pub Date: 2026-02-04 | DOI: 10.1088/2057-1976/ae38e5
Yuemei Zheng, Tingting Weng, Yueyue Chang, Sijing Ma, Jian Zhang, Li Guo
Title: "Segmentation and calculation of lung fibrosis in IPF mice by 2.5D UNet"
Pub Date: 2026-02-04 | DOI: 10.1088/2057-1976/ae3d3e
Yanniklas Kravutske, Mateus A Esmeraldo, Eduardo P Reis, Stefanie Chambers, Lukas Haider, Gregor Kasprian, Bruno P Soares
Introduction. Focal cortical dysplasia type II (FCD II) is a significant cause of drug-resistant epilepsy, and full surgical resection of the lesion is linked with excellent disease-free outcomes. Its imaging hallmark is the white matter hyperintense, funnel-shaped transmantle sign on T2-FLAIR magnetic resonance imaging (MRI). Manual delineation of this abnormality is challenging and inconsistent. Most current artificial intelligence (AI) segmentation tools focus on cortical features and do not fully evaluate the white matter component. We tested whether integrating an algorithm trained on white matter lesions may improve FCD II segmentation. Methods. We evaluated the combination of two AI algorithms, MELD Graph (surface-based FCD segmentation) and MindGlide (a whole-brain/white-matter lesion segmentation tool), in 49 FCD cases with a radiologically confirmed transmantle sign. Segmentation accuracy was assessed against expert manual annotations using the Dice similarity coefficient and segmentation volumes. Results. MELD Graph detected the lesion in 31 cases, 22 of which had the transmantle sign included in the expert lesion mask. Among these, MindGlide detected the transmantle sign in eight cases (36%). The mean added Dice score was 0.033 (95% CI, 0.013-0.056). The overall Dice value for MELD Graph was 0.321 and increased to 0.354 with the addition of MindGlide. MindGlide also contributed additional lesion volume in these eight cases, ranging from 0.028 to 4.18 cm³, with a mean added volume of 0.77 cm³. Discussion. Despite not being trained on FCD data, MindGlide, when combined with MELD Graph, provided a modest improvement in FCD II segmentation, including the deep white matter component of the lesion that is not captured by MELD Graph. Conclusion. These findings provide preliminary evidence supporting a sequential cortical and white matter segmentation approach in FCD II, which may guide further epilepsy-specific AI model development.
Title: "Comprehensive segmentation of focal cortical dysplasia by combining surface-based and whole-brain MRI deep learning algorithms: a proof-of-concept study"
Pub Date: 2026-02-04 | DOI: 10.1088/2057-1976/ae3966
Jiayu Lin, Liwen Zou, Yiming Gao, Liang Mao, Ziwei Nie
Accurate and automatic registration of the pancreas between contrast-enhanced CT (CECT) and non-contrast CT (NCCT) images is crucial for diagnosing and treating pancreatic cancer. However, existing deep learning-based methods remain limited due to inherent intensity differences between modalities, which impair intensity-based similarity metrics, and the pancreas's small size, vague boundaries, and complex surroundings, which trap segmentation-based metrics in local optima. To address these challenges, we propose a weakly supervised registration framework incorporating a novel mixed loss function. This loss leverages the Wasserstein distance to enforce anatomical topology consistency in 3D pancreas registration between CECT and NCCT. We employ distance transforms to build the small, uncertain, and complex anatomical topology distribution of the pancreas. Unlike conventional voxel-wise L1 or L2 losses, the Wasserstein distance directly measures the similarity between the warped and fixed anatomical topologies of the pancreas. Experiments on a dataset of 975 paired CECT-NCCT images from patients with seven pancreatic tumor types (PDAC, IPMN, MCN, SCN, SPT, CP, PNET) demonstrate that our method outperforms state-of-the-art weakly supervised approaches, achieving a 3.2% improvement in Dice score, a 28.54% reduction in false-positive segmentation rate, and a 0.89% reduction in Hausdorff distance. The source code will be made publicly available at https://github.com/ZouLiwen-1999/WSMorph.
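The abstract does not detail the implementation, but the idea of comparing distance-transform-derived distributions with a Wasserstein distance can be illustrated in miniature. The sketch below (the `topology_distance` helper, toy masks, and 1-D formulation are assumptions, not the authors' WSMorph code) shows that a shifted copy of a shape scores lower than a differently sized one:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from scipy.stats import wasserstein_distance

def topology_distance(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """1-D Wasserstein distance between the distributions of Euclidean
    distance-transform values inside two binary masks (illustrative)."""
    dt_a = distance_transform_edt(mask_a)
    dt_b = distance_transform_edt(mask_b)
    return wasserstein_distance(dt_a[mask_a > 0], dt_b[mask_b > 0])

a = np.zeros((16, 16)); a[4:12, 4:12] = 1   # 8x8 square
b = np.zeros((16, 16)); b[5:13, 5:13] = 1   # same square, shifted by 1 voxel
c = np.zeros((16, 16)); c[2:14, 2:14] = 1   # larger 12x12 square

# A pure shift leaves the distance-value distribution unchanged,
# while a size change does not.
assert topology_distance(a, b) < topology_distance(a, c)
```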
Title: "Learning the anatomical topology consistency driven by Wasserstein distance for weakly supervised 3D pancreas registration in multi-phase CT images"
Pub Date: 2026-02-04 | DOI: 10.1088/2057-1976/ae3b44
Jeremy S Bredfeldt, Arianna Liles, Yue-Houng Hu, Dianne Ferguson, Christian Guthier, David Hu, Scott Friesen, Kolade Agboola, John Whitaker, Hubert Cochet, Usha Tedrow, Ray Mak, Kelly Fitzgerald
Background and purpose. To determine the interobserver variability in registrations of cardiac computed tomography (CT) images and to assess the margins needed to account for the observed variability in the context of stereotactic arrhythmia radioablation (STAR). Materials and methods. STAR targets were delineated on cardiac CTs for fifteen consecutive patients. Ten expert observers were asked to rigidly register the cardiac CT images to the corresponding planning CT images. All registrations started with a fully automated registration step, followed by manual adjustments. The targets were transferred from cardiac to planning CT using each of the registrations, along with one consensus registration for each patient. The margin needed for the consensus target to encompass each of the observer and fully automated targets was measured. Results. A total of 150 registrations were evaluated. Manual registrations required an average (standard deviation) of 5 min 55 s (2 min 10 s) to perform. The automated registration, without manual intervention, required an expansion of 6 mm to achieve 95% overlap for 97% of patients. For the manual registrations, an expansion of 4 mm achieved 95% overlap for 97% of the patients and observers; the remaining 3% required expansions of 4 to 9 mm. An expansion of 3 mm achieved 95% overlap in 88% of the cases. Some patients required larger expansions than others, and a small target volume was common among these more difficult cases. Neither breath-hold nor target position was observed to affect variability among observers. Some observers required larger expansions than others, and those requiring the largest margins were not the same from patient to patient. Conclusion. Registration of cardiac CT to the planning CT contributed approximately 3 mm of uncertainty to the STAR targeting process.
Accordingly, workflows in which target delineation is performed on cardiac CT should explicitly account for this uncertainty in the overall target margin assessment.
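The margin analysis above can be illustrated as expanding the consensus target isotropically and measuring the fraction of an observer target it then encompasses. A minimal voxel-based sketch (the `coverage_after_expansion` helper and toy targets are illustrative, not the study's method):

```python
import numpy as np
from scipy.ndimage import binary_dilation

def coverage_after_expansion(consensus: np.ndarray, observer: np.ndarray,
                             margin_vox: int) -> float:
    """Fraction of the observer target enclosed by the consensus target
    after an isotropic expansion of `margin_vox` voxels (illustrative)."""
    struct = np.ones((3, 3, 3), dtype=bool)  # grows 1 voxel in every direction per iteration
    expanded = consensus.astype(bool)
    if margin_vox > 0:
        expanded = binary_dilation(expanded, structure=struct, iterations=margin_vox)
    return np.logical_and(expanded, observer.astype(bool)).sum() / observer.sum()

consensus = np.zeros((20, 20, 20), dtype=bool)
consensus[8:12, 8:12, 8:12] = True            # 4x4x4 consensus target
observer = np.zeros_like(consensus)
observer[7:13, 7:13, 7:13] = True             # observer target, 1 voxel larger all round

assert coverage_after_expansion(consensus, observer, 0) < 1.0
assert coverage_after_expansion(consensus, observer, 1) == 1.0  # 1-voxel margin suffices
```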
Title: "Interobserver image registration variability impacts on stereotactic arrhythmia radioablation (STAR) target margins"
Pub Date : 2026-02-03 DOI: 10.1088/2057-1976/ae3b45
Jason Leung, Ledycnarf J Holanda, Laura Wheeler, Tom Chau
In-ear electroencephalography (EEG) systems offer several practical advantages over scalp-based EEG systems for non-invasive brain-computer interface (BCI) applications. However, the difficulty in fabricating in-ear EEG systems can limit their accessibility for BCI use cases. In this study, we developed a portable, low-cost wireless in-ear EEG device using commercially available components. In-ear EEG signals (referenced to left mastoid) from 5 adolescent participants were compared to scalp-EEG collected simultaneously during an alpha modulation task, various artifact induction tasks, and an auditory word-streaming BCI paradigm. Spectral analysis confirmed that the proposed in-ear EEG system could capture significantly increased alpha activity during eyes-closed relaxation in 3 of 5 participants, with a signal-to-noise ratio of 2.34 across all participants. In-ear EEG signals were most susceptible to horizontal head movement, coughing and vocalization artifacts but were relatively insensitive to ocular artifacts such as blinking. For the auditory streaming paradigm, the classifier decoded the presented stimuli from in-ear EEG signals only in 1 of 5 participants. Classification of the attended stream did not exceed chance levels. Contrast plots showing the difference between attended and unattended streams revealed reduced amplitudes of in-ear EEG responses relative to scalp-EEG responses. Hardware modifications are needed to amplify in-ear signals and measure electrode-skin impedances to improve the viability of in-ear EEG for BCI applications.
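The alpha-modulation check above can be sketched as follows. The abstract does not state how its signal-to-noise ratio was defined; one common convention is the ratio of alpha-band (8-12 Hz) power during eyes-closed relaxation to that during an eyes-open baseline, estimated with Welch's method. The synthetic signals and the SNR definition here are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import welch

def alpha_band_power(x, fs, band=(8.0, 12.0)):
    """Alpha-band power from a Welch power spectral density estimate."""
    freqs, psd = welch(x, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].sum() * (freqs[1] - freqs[0])  # integrate PSD over band

fs = 250  # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
t = np.arange(0, 30, 1 / fs)  # two 30 s segments

eyes_open = rng.normal(0.0, 1.0, t.size)                     # baseline noise
eyes_closed = rng.normal(0.0, 1.0, t.size) \
    + 1.5 * np.sin(2 * np.pi * 10 * t)                       # 10 Hz alpha rhythm

snr = alpha_band_power(eyes_closed, fs) / alpha_band_power(eyes_open, fs)
print(round(snr, 2))
```

Because the simulated alpha rhythm dominates the band, the ratio here is much larger than the 2.34 reported for real in-ear recordings, which underlines how attenuated in-ear signals are relative to a clean synthetic case.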
Wireless in-ear EEG system for auditory brain-computer interface applications in adolescents.
Pub Date : 2026-02-03 DOI: 10.1088/2057-1976/ae3b46
Chaoyi Lyu, Lu Zhao, Yuan Xie, Wangyuan Zhao, Yufu Zhou, Hua Nong Ting, Puming Zhang, Jun Zhao
The rapid development of deep learning-based computational pathology and genomics has demonstrated the significant promise of effectively integrating whole slide images (WSIs) and genomic data for cancer survival prediction. However, the substantial heterogeneity between pathological and genomic features makes exploring complex cross-modal relationships and constructing comprehensive patient representations challenging. To address this, we propose the Information Compression-based Multimodal Confidence-guided Fusion Network (iMCN). The framework is built around two key modules. First, the Adaptive Pathology Information Compression (APIC) module employs learnable information centers to dynamically cluster image regions, removing redundant information while maintaining discriminative survival-related patterns. Second, the Confidence-guided Multimodal Fusion (CMF) module utilizes a learned sub-network to estimate the confidence of each modality's representation, allowing for dynamic weighted fusion that prioritizes the most reliable features in each case. Evaluated on the TCGA-LUAD and TCGA-BRCA cohorts, iMCN achieved average concordance index (C-index) values of 0.691 and 0.740, respectively, outperforming existing state-of-the-art methods by an absolute improvement of 1.65%. Qualitatively, the model generates interpretable heatmaps that localize high-association regions between specific morphological structures (e.g., tumor cell nests) and functional genomic pathways (e.g., oncogenesis), offering biological insights into genomic-pathologic linkages. In conclusion, iMCN significantly advances multimodal survival analysis by introducing a principled framework for information compression and confidence-based fusion. In addition, correlation analysis reveals that tissue heterogeneity influences optimal retention rates differently across cancer types, with higher-heterogeneity tumors (e.g., LUAD) benefiting more from aggressive information compression.
Beyond its predictive performance, the model's ability to elucidate the interplay between tissue morphology and molecular biology enhances its value as a tool for translational cancer research.
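Confidence-guided fusion of the kind described for the CMF module can be sketched generically: a small sub-network scores each modality's embedding, the scores are normalized with a softmax, and the embeddings are combined by the resulting weights. The paper does not publish the CMF architecture, so the single linear scoring layer, the parameter names, and all values below are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(42)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def confidence_score(embedding, w, b):
    """Toy confidence sub-network: one linear layer producing a scalar score."""
    return float(embedding @ w + b)

d = 16  # shared embedding dimension (assumed)
path_emb = rng.normal(size=d)  # pathology (WSI) representation
gene_emb = rng.normal(size=d)  # genomic representation

# Hypothetical learned parameters of the confidence sub-network.
w, b = rng.normal(size=d), 0.0
scores = np.array([confidence_score(path_emb, w, b),
                   confidence_score(gene_emb, w, b)])
weights = softmax(scores)  # per-case modality weights; sum to 1

fused = weights[0] * path_emb + weights[1] * gene_emb
print(weights.round(3), fused.shape)
```

The key design point is that the weights are computed per case, so a patient whose genomic profile is more informative than their slide (or vice versa) gets a correspondingly shifted fusion.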
iMCN: information compression-based multimodal confidence-guided fusion network for cancer survival prediction.
Pub Date : 2026-02-03 DOI: 10.1088/2057-1976/ae3571
Guillaume Houyoux, Kilian-Simon Baumann, Nick Reynaert
Objective. In the revised version of the TRS-398 Code of Practice (CoP), Monte Carlo (MC) results were added to existing experimental data to derive the recommended beam quality correction factors (kQ) for ionisation chambers in proton beams. While part of these results were obtained with versions v10.3 and v10.4 of the Geant4 simulation toolkit, this paper demonstrates that using a more recent version, such as v11.2, can affect the value of the kQ factors. Approach. The chamber-specific proton contributions (fQ) of the kQ factors were derived for four ionisation chambers using two versions of the code, Geant4-v10.3 and Geant4-v11.2. The total absorbed dose values are compared, as are the dose contributions from primary and secondary particles. Main results. Larger absorbed dose values per incident particle were obtained with Geant4-v11.2 than with Geant4-v10.3, especially for dose-to-air at high proton beam energies between 150 MeV and 250 MeV, leading to deviations in the kQ values of up to 1%. These deviations are mainly due to a change in the physics of secondary helium ions, for which the differences between the Geant4 versions are largest within the entrance window or the shell of the ionisation chambers. Significance. Although significant deviations in the MC-calculated fQ values were observed between the two Geant4 versions, the dominant uncertainty of the Wair values currently allows agreement to be achieved at the kQ level. As these values also agree with the data presented in the TRS-398 CoP, it is not currently possible to discriminate between Geant4-v10.3 and Geant4-v11.2, which are therefore both suitable for kQ calculation.
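The propagation from fQ to kQ can be illustrated with a short calculation. In the MC formalism used for dosimetry codes of practice, kQ is, to first order, proportional to the ratio (Wair,Q · fQ) / (Wair,Q0 · fQ0) between the proton beam quality Q and the 60Co reference Q0, so a version-dependent shift in fQ carries through linearly to kQ. The Wair values below are approximately those quoted in TRS-398; the fQ values are invented for illustration and are not taken from the paper.

```python
# Hypothetical illustration: how a Monte Carlo change in the chamber
# factor f_Q propagates into the beam quality correction factor k_Q.
# k_Q ~ (W_air,Q * f_Q) / (W_air,Q0 * f_Q0), with Q the proton beam
# and Q0 the 60Co reference beam.

W_air_Q, W_air_Q0 = 34.23, 33.97   # ~TRS-398 mean energies per ion pair (J/C)
f_Q0 = 1.102                        # hypothetical 60Co chamber factor

f_Q_v103 = 1.115                    # hypothetical Geant4-v10.3 result
f_Q_v112 = f_Q_v103 * 1.01          # ~1% higher with Geant4-v11.2 (invented)

kQ_v103 = (W_air_Q * f_Q_v103) / (W_air_Q0 * f_Q0)
kQ_v112 = (W_air_Q * f_Q_v112) / (W_air_Q0 * f_Q0)

rel_shift = kQ_v112 / kQ_v103 - 1   # shift in k_Q from the f_Q change alone
print(f"kQ (v10.3) = {kQ_v103:.4f}, kQ (v11.2) = {kQ_v112:.4f}, "
      f"shift = {rel_shift:.2%}")
```

Because kQ is linear in fQ, the 1% shift in fQ appears unchanged in kQ; this is smaller than the standard uncertainty typically assigned to Wair, which is why the abstract concludes that both Geant4 versions remain acceptable.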
Monte Carlo derivation of beam quality correction factors in proton beams: a comparison of Geant4 versions.