Pub Date: 2026-02-04 | DOI: 10.1088/2057-1976/ae3966
Jiayu Lin, Liwen Zou, Yiming Gao, Liang Mao, Ziwei Nie
Accurate and automatic registration of the pancreas between contrast-enhanced CT (CECT) and non-contrast CT (NCCT) images is crucial for diagnosing and treating pancreatic cancer. However, existing deep learning-based methods remain limited: inherent intensity differences between modalities impair intensity-based similarity metrics, while the pancreas's small size, vague boundaries, and complex surroundings trap segmentation-based metrics in local optima. To address these challenges, we propose a weakly supervised registration framework incorporating a novel mixed loss function. This loss leverages the Wasserstein distance to enforce anatomical topology consistency in 3D pancreas registration between CECT and NCCT. We employ distance transforms to build the anatomical topology distribution of the small, uncertain, and complex pancreas. Unlike conventional voxel-wise L1 or L2 losses, the Wasserstein distance directly measures the similarity between the warped and fixed anatomical topologies of the pancreas. Experiments on a dataset of 975 paired CECT-NCCT images from patients with seven pancreatic tumor types (PDAC, IPMN, MCN, SCN, SPT, CP, PNET) demonstrate that our method outperforms state-of-the-art weakly supervised approaches, improving the Dice score by 3.2% and reducing the false positive segmentation rate by 28.54% and the Hausdorff distance by 0.89%. The source code will be made publicly available at https://github.com/ZouLiwen-1999/WSMorph.
Title: Learning the anatomical topology consistency driven by Wasserstein distance for weakly supervised 3D pancreas registration in multi-phase CT images.
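The core idea, using distance transforms of the pancreas masks as a topology distribution and comparing warped against fixed distributions with a Wasserstein distance, can be sketched in a few lines. This is not the authors' implementation (their WSMorph code is linked in the abstract); it is a minimal numpy illustration on toy 2-D masks, with a brute-force distance transform standing in for a real one and the 1-D Wasserstein-1 distance computed from sorted samples.

```python
import numpy as np

def distance_map(mask):
    """Brute-force distance transform of a small binary mask: for each
    background voxel, the Euclidean distance to the nearest foreground voxel.
    (A dependency-free stand-in for scipy.ndimage.distance_transform_edt.)"""
    fg, bg = np.argwhere(mask), np.argwhere(~mask)
    out = np.zeros(mask.shape)
    for idx in bg:
        out[tuple(idx)] = np.sqrt(((fg - idx) ** 2).sum(axis=1)).min()
    return out

def wasserstein_1d(a, b):
    """Wasserstein-1 distance between two equal-size empirical samples:
    the mean absolute difference of their sorted values."""
    return np.abs(np.sort(a.ravel()) - np.sort(b.ravel())).mean()

# Toy 2-D masks standing in for the fixed and warped pancreas segmentations.
fixed = np.zeros((8, 8), bool)
fixed[2:6, 2:6] = True
warped = np.zeros((8, 8), bool)
warped[3:7, 3:7] = True

loss = wasserstein_1d(distance_map(warped), distance_map(fixed))
print(round(loss, 4))
```

Unlike a voxel-wise L1 loss, the sorted-sample comparison is insensitive to where the mass sits and instead compares the shape of the two distance distributions.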
Pub Date: 2026-02-04 | DOI: 10.1088/2057-1976/ae3b44
Jeremy S Bredfeldt, Arianna Liles, Yue-Houng Hu, Dianne Ferguson, Christian Guthier, David Hu, Scott Friesen, Kolade Agboola, John Whitaker, Hubert Cochet, Usha Tedrow, Ray Mak, Kelly Fitzgerald
Background and purpose. To determine the interobserver variability in registrations of cardiac computed tomography (CT) images and to assess the margins needed to account for the observed variability in the context of stereotactic arrhythmia radioablation (STAR). Materials and methods. STAR targets were delineated on cardiac CTs for fifteen consecutive patients. Ten expert observers were asked to rigidly register the cardiac CT images to corresponding planning CT images. All registrations started with a fully automated registration step, followed by manual adjustments. The targets were transferred from cardiac to planning CT using each of the registrations, along with one consensus registration for each patient. The margin needed for the consensus target to encompass each observer's target and the fully automated target was measured. Results. A total of 150 registrations were evaluated for this study. Manual registrations required an average (standard deviation) of 5 min 55 s (2 min 10 s) to perform. The automated registration, without manual intervention, required an expansion of 6 mm to achieve 95% overlap for 97% of patients. For the manual registrations, an expansion of 4 mm achieved 95% overlap for 97% of the patients and observers; the remaining 3% required expansions of 4 to 9 mm. An expansion of 3 mm achieved 95% overlap in 88% of the cases. Some patients required larger expansions than others, and a small target volume was common among these more difficult cases. Neither breath-hold nor target position was observed to affect variability among observers. Some observers required larger expansions than others, and those requiring the largest margins were not the same from patient to patient. Conclusion. Registration of cardiac CT to the planning CT contributed approximately 3 mm of uncertainty to the STAR targeting process. Accordingly, workflows in which target delineation is performed on cardiac CT should explicitly account for this uncertainty in the overall target margin assessment.
Title: Interobserver image registration variability impacts on stereotactic arrhythmia radioablation (STAR) target margins.
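The margin analysis, growing the consensus target until it covers at least 95% of an observer's target, can be illustrated with a small numpy sketch. This is a simplified 2-D stand-in for the paper's 3-D workflow; the toy masks and the 1 mm isotropic voxel size are hypothetical.

```python
import numpy as np

def dilate(mask, iterations):
    """Binary dilation with a 3x3 cross, built from shifted copies (numpy only).
    Uses np.roll, so masks must stay clear of the array border."""
    out = mask.copy()
    for _ in range(iterations):
        grown = out.copy()
        for axis in (0, 1):
            for shift in (1, -1):
                grown |= np.roll(out, shift, axis=axis)
        out = grown
    return out

def margin_for_overlap(consensus, observer, target=0.95, max_steps=10, mm_per_voxel=1.0):
    """Smallest isotropic expansion (in mm) of the consensus target needed to
    cover at least `target` of the observer's target volume."""
    for step in range(max_steps + 1):
        grown = dilate(consensus, step) if step else consensus
        if (grown & observer).sum() / observer.sum() >= target:
            return step * mm_per_voxel
    return None

# Hypothetical 2-D targets: the observer's contour is shifted two voxels
# diagonally from the consensus contour.
consensus = np.zeros((16, 16), bool)
consensus[4:8, 4:8] = True
observer = np.zeros((16, 16), bool)
observer[6:10, 6:10] = True

margin = margin_for_overlap(consensus, observer)
print(margin)  # → 4.0
```

Repeating this over all observer-patient pairs and taking the expansion that satisfies 95% overlap for the desired fraction of cases mirrors the study's margin statistics.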
Pub Date: 2026-02-03 | DOI: 10.1088/2057-1976/ae4105
Joshua Dugdale, Garrett Scott Black, Jordan Alexander Borrell
Functional near-infrared spectroscopy (fNIRS) is a portable, non-invasive brain imaging method with growing applications in neurorehabilitation. However, signal variability, driven in part by differences in data processing pipelines, remains a major barrier to its clinical adoption. This study compares the robustness of two common processing approaches, General Linear Model (GLM) and Block Averaging (BA), in detecting cortical activation across task complexities. Eighteen neurotypical, healthy adults completed a simple hand grasp task and a more complex gross manual dexterity task while fNIRS data were recorded and analyzed using the BA and GLM pipelines. Results revealed significant effects of both pipeline and task complexity on oxygenated and deoxygenated hemoglobin amplitudes. BA produced significantly larger responses than GLM, and complex tasks elicited significantly greater activation than simple tasks. Notably, only the BA-Complex subgroup showed significant differences from all other conditions, suggesting BA more effectively detects task-related hemodynamic changes. These findings emphasize the need for careful analysis pipeline selection to reduce variability and enhance fNIRS reliability in neurorehabilitation research.
Title: Investigating Functional Near-Infrared Spectroscopy Signal Variability: The Role of Processing Pipelines and Task Complexity.
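The two pipelines compared in the study can be mimicked on synthetic data: block averaging takes baseline-corrected means over task epochs, while a GLM fits a task regressor by least squares. The signal model, block timing, and noise level below are invented for illustration and are not the study's acquisition parameters; both estimators recover the simulated amplitude.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n = 10, 3000                         # 10 Hz sampling, 300 s recording
onsets = np.arange(200, 2800, 400)       # seven task blocks, one every 40 s
dur = 100                                # 10 s task blocks

# Synthetic HbO trace: a boxcar response of amplitude 1.0 plus Gaussian noise.
design = np.zeros(n)
for onset in onsets:
    design[onset:onset + dur] = 1.0
hbo = 1.0 * design + rng.normal(0.0, 0.3, n)

# Block averaging (BA): mean in-task amplitude minus a 5 s pre-block baseline.
epochs = [hbo[o:o + dur].mean() - hbo[o - 50:o].mean() for o in onsets]
ba_amplitude = float(np.mean(epochs))

# GLM: ordinary least squares fit of the boxcar regressor plus an intercept.
X = np.column_stack([design, np.ones(n)])
beta, *_ = np.linalg.lstsq(X, hbo, rcond=None)
glm_amplitude = float(beta[0])

print(round(ba_amplitude, 2), round(glm_amplitude, 2))
```

On clean synthetic data the two estimates agree; the study's point is that on real fNIRS data, with drifts and physiological noise, the pipelines diverge, so the choice matters.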
Pub Date: 2026-02-03 | DOI: 10.1088/2057-1976/ae3b45
Jason Leung, Ledycnarf J Holanda, Laura Wheeler, Tom Chau
In-ear electroencephalography (EEG) systems offer several practical advantages over scalp-based EEG systems for non-invasive brain-computer interface (BCI) applications. However, the difficulty of fabricating in-ear EEG systems can limit their accessibility for BCI use cases. In this study, we developed a portable, low-cost wireless in-ear EEG device using commercially available components. In-ear EEG signals (referenced to the left mastoid) from 5 adolescent participants were compared to scalp EEG collected simultaneously during an alpha modulation task, various artifact induction tasks, and an auditory word-streaming BCI paradigm. Spectral analysis confirmed that the proposed in-ear EEG system could capture significantly increased alpha activity during eyes-closed relaxation in 3 of 5 participants, with a signal-to-noise ratio of 2.34 across all participants. In-ear EEG signals were most susceptible to horizontal head movement, coughing, and vocalization artifacts but were relatively insensitive to ocular artifacts such as blinking. For the auditory streaming paradigm, the classifier decoded the presented stimuli from in-ear EEG signals in only 1 of 5 participants. Classification of the attended stream did not exceed chance levels. Contrast plots showing the difference between attended and unattended streams revealed reduced amplitudes of in-ear EEG responses relative to scalp EEG responses. Hardware modifications are needed to amplify in-ear signals and measure electrode-skin impedances to improve the viability of in-ear EEG for BCI applications.
Title: Wireless in-ear EEG system for auditory brain-computer interface applications in adolescents.
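An alpha signal-to-noise ratio of the kind reported above can be computed from a band-power ratio. The sketch below is a generic illustration, not the paper's analysis pipeline: a Welch-style averaged periodogram in plain numpy (rather than scipy.signal.welch) on a synthetic recording with an injected 10 Hz rhythm, with 8-12 Hz power compared against the surrounding background band. All signal parameters are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, seconds = 250, 60
n = fs * seconds
# Synthetic single-channel recording: broadband noise plus a 10 Hz alpha rhythm.
t = np.arange(n) / fs
eeg = rng.normal(0.0, 1.0, n) + 2.0 * np.sin(2 * np.pi * 10 * t)

# Welch-style averaged periodogram: non-overlapping 2 s Hann-windowed segments.
seg = 2 * fs
segments = eeg[: n - n % seg].reshape(-1, seg) * np.hanning(seg)
psd = (np.abs(np.fft.rfft(segments, axis=1)) ** 2).mean(axis=0)
freqs = np.fft.rfftfreq(seg, 1.0 / fs)

alpha_power = psd[(freqs >= 8) & (freqs <= 12)].mean()
background = psd[((freqs >= 2) & (freqs < 8)) | ((freqs > 12) & (freqs <= 30))].mean()
snr = alpha_power / background
print(round(snr, 1))
```

Comparing this ratio between eyes-open and eyes-closed segments is the standard way to confirm alpha modulation, as done for 3 of the 5 participants in the study.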
Pub Date: 2026-02-03 | DOI: 10.1088/2057-1976/ae3b46
Chaoyi Lyu, Lu Zhao, Yuan Xie, Wangyuan Zhao, Yufu Zhou, Hua Nong Ting, Puming Zhang, Jun Zhao
The rapid development of deep learning-based computational pathology and genomics has demonstrated the significant promise of effectively integrating whole slide images (WSIs) and genomic data for cancer survival prediction. However, the substantial heterogeneity between pathological and genomic features makes exploring complex cross-modal relationships and constructing comprehensive patient representations challenging. To address this, we propose the Information Compression-based Multimodal Confidence-guided Fusion Network (iMCN). The framework is built around two key modules. First, the Adaptive Pathology Information Compression (APIC) module employs learnable information centers to dynamically cluster image regions, removing redundant information while maintaining discriminative survival-related patterns. Second, the Confidence-guided Multimodal Fusion (CMF) module utilizes a learned sub-network to estimate the confidence of each modality's representation, allowing for dynamic weighted fusion that prioritizes the most reliable features in each case. Evaluated on the TCGA-LUAD and TCGA-BRCA cohorts, iMCN achieved average concordance index (C-index) values of 0.691 and 0.740, respectively, outperforming existing state-of-the-art methods by an absolute improvement of 1.65%. Qualitatively, the model generates interpretable heatmaps that localize high-association regions between specific morphological structures (e.g., tumor cell nests) and functional genomic pathways (e.g., oncogenesis), offering biological insights into genomic-pathologic linkages. In conclusion, iMCN significantly advances multimodal survival analysis by introducing a principled framework for information compression and confidence-based fusion. In addition, correlation analysis reveals that tissue heterogeneity influences optimal retention rates differently across cancer types, with higher-heterogeneity tumors (e.g., LUAD) benefiting more from aggressive information compression.
Beyond its predictive performance, the model's ability to elucidate the interplay between tissue morphology and molecular biology enhances its value as a tool for translational cancer research.
Title: iMCN: information compression-based multimodal confidence-guided fusion network for cancer survival prediction.
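The CMF idea, weighting each modality's embedding by a softmax over per-modality confidence scores, reduces to a few lines. This sketch is not the iMCN sub-network itself: in the paper the confidence logits come from a learned network, whereas the values and embeddings below are hypothetical.

```python
import numpy as np

def confidence_fusion(features, confidence_logits):
    """Softmax the per-modality confidence scores, then return the
    confidence-weighted sum of the modality embeddings."""
    w = np.exp(confidence_logits - confidence_logits.max())  # stable softmax
    w = w / w.sum()
    return (w[:, None] * features).sum(axis=0), w

# Hypothetical 4-d embeddings for two modalities (WSI, genomics); a confidence
# sub-network (not shown) is assumed to have judged genomics more reliable.
features = np.array([[1.0, 0.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0, 0.0]])
fused, weights = confidence_fusion(features, np.array([0.0, 2.0]))
print(np.round(weights, 3))  # → [0.119 0.881]
```

Because the weights are computed per patient, a case with a poor-quality slide can lean on genomics and vice versa, which is the point of confidence-guided fusion.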
Pub Date: 2026-02-03 | DOI: 10.1088/2057-1976/ae4108
Xiaojing Hou, Yonghong Wu
Efficient and accurate image segmentation models play a vital role in medical image segmentation; however, the high computational cost of traditional models limits clinical deployment. Based on pyramid visual transformers and convolutional neural networks, this paper proposes a lightweight Context Contrast Enhancement Network (CCE-Net) that ensures efficient inference and achieves accurate segmentation through a contextual feature synergy mechanism and a feature contrast enhancement strategy. The Local Context Fusion Enhancement module is designed to capture more specific local detail through cross-layer context fusion and to bridge the semantic gap between the encoder and decoder. The Deep Feature Multi-scale Extraction module is proposed to fully extract the information in the deepest features of the model's bottleneck layer and to provide more accurate global contextual features for the decoder. The Detail Contrast Enhancement Decoder module is designed to address the inherent problems of missing image detail and blurred edges through adaptive dual-branch feature fusion and frequency-domain contrast enhancement operations. Experiments show that CCE-Net requires only 5.40M parameters and 0.80G FLOPs (37%-62% fewer parameters than mainstream models) while achieving average Dice coefficients of 82.25% and 91.88% on the Synapse and ACDC datasets, respectively, promoting the transition of lightweight medical AI models from laboratory research to clinical practice.
Title: CCE-Net: A Lightweight Context Contrast Enhancement Network and Its Application in Medical Image Segmentation.
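Parameter budgets like 5.40M come from lightweight design choices. As a generic illustration (not CCE-Net's actual modules, which the abstract does not specify at this level), replacing a standard 3x3 convolution with a depthwise-separable one cuts the parameter count sharply:

```python
def conv_params(c_in, c_out, k):
    """Parameter count of a standard k x k convolution (biases ignored)."""
    return c_in * c_out * k * k

def dw_separable_params(c_in, c_out, k):
    """Depthwise k x k convolution followed by a pointwise 1 x 1 convolution."""
    return c_in * k * k + c_in * c_out

standard = conv_params(64, 128, 3)
light = dw_separable_params(64, 128, 3)
saving = 100.0 * (1.0 - light / standard)
print(standard, light, round(saving, 1))  # → 73728 8768 88.1
```

Summing such counts over every layer is how the 5.40M figure (and the 37%-62% reduction versus mainstream models) would be tallied.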
Pub Date: 2026-02-03 | DOI: 10.1088/2057-1976/ae3571
Guillaume Houyoux, Kilian-Simon Baumann, Nick Reynaert
Objective. In the revised version of the TRS-398 Code of Practice (CoP), Monte Carlo (MC) results were added to existing experimental data to derive the recommended beam quality correction factors (kQ) for ionisation chambers in proton beams. While part of these results were obtained from versions v10.3 and v10.4 of the Geant4 simulation tool, this paper demonstrates that the use of a more recent version, such as v11.2, can affect the value of the kQ factors. Approach. The chamber-specific proton contributions (fQ) of the kQ factors were derived for four ionisation chambers using two different versions of the code, namely Geant4-v10.3 and Geant4-v11.2. The total absorbed dose values are compared, as are the dose contributions from primary and secondary particles. Main results. Larger absorbed dose values per incident particle were derived with Geant4-v11.2 than with Geant4-v10.3, especially for dose-to-air at high proton beam energies between 150 MeV and 250 MeV, leading to deviations in the kQ values of up to 1%. These deviations are mainly due to a change in the physics of secondary helium ions, for which the deviations between the Geant4 versions are most pronounced within the entrance window or the shell of the ionisation chambers. Significance. Although significant deviations in the MC-calculated fQ values were observed between the two Geant4 versions, the dominant uncertainty of the Wair values currently allows agreement at the kQ level. As these values also agree with the current data presented in the TRS-398 CoP, it is not possible at the moment to discriminate between Geant4-v10.3 and Geant4-v11.2, which are therefore both suitable for kQ calculation.
Title: Monte Carlo derivation of beam quality correction factors in proton beams: a comparison of Geant4 versions.
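Why a roughly 1% shift in fQ can still leave kQ values in agreement: under a simplified decomposition (an assumption for this sketch, not the paper's formalism) in which kQ is a product of a chamber-specific factor ratio and a Wair ratio, the Wair relative uncertainty dominates a quadrature sum. The numbers below are illustrative and are not taken from the paper or from TRS-398.

```python
import math

# Illustrative numbers: a chamber-specific factor ratio f_Q / f_Q0 and a
# W_air ratio, each with a relative standard uncertainty.
f_ratio, u_f = 0.990, 0.004   # MC-derived chamber contribution (~1% version shift)
w_ratio, u_w = 1.006, 0.008   # W_air ratio: the dominant uncertainty term

k_q = f_ratio * w_ratio
u_kq = math.sqrt(u_f ** 2 + u_w ** 2)   # relative uncertainties in quadrature
print(round(k_q, 4), round(u_kq, 4))  # → 0.9959 0.0089
```

With the combined relative uncertainty near 0.9%, two kQ values differing by up to 1% remain statistically compatible, which is the paper's reason neither Geant4 version can yet be ruled out.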
Knee Osteoarthritis (KOA) is a prevalent degenerative joint disease affecting millions worldwide. Accurate classification of KOA severity is crucial for effective diagnosis and treatment planning. This study introduces a novel multi-classification algorithm for x-ray KOA grading based on MambaOut and a Latent Diffusion Model (LDM). MambaOut, an emerging network architecture, achieves superior classification performance compared to fine-tuned mainstream Convolutional Neural Networks (CNNs) for KOA classification. To address sample imbalance across KL grades, we propose a generative model based on the LDM to synthesize new data. This approach augments minority-class samples by optimizing the autoencoder's loss function and incorporating pathological labels into the LDM framework. Our approach achieves an average accuracy of 86.3%, an average precision of 85.3%, an F1 score of 0.855, and a mean absolute error reduced to 14.7% on the four-class task, outperforming recent advanced methods. This study not only advances KOA classification techniques but also highlights the potential of integrating advanced neural architectures with generative models for medical image analysis.
{"title":"Enhanced x-ray knee osteoarthritis classification: a multi-classification approach using MambaOut and latent diffusion model.","authors":"Xin Wang, Yupeng Fu, Xiaodong Cai, Huimin Lu, Yuncong Feng, Rui Xu","doi":"10.1088/2057-1976/ae3b43","DOIUrl":"10.1088/2057-1976/ae3b43","url":null,"abstract":"<p><p>Knee Osteoarthritis (KOA) is a prevalent degenerative joint disease affecting millions worldwide. Accurate classification of KOA severity is crucial for effective diagnosis and treatment planning. This study introduces a novel multi-classification algorithm for x-ray KOA based on MambaOut and a Latent Diffusion Model (LDM). MambaOut, an emerging network architecture, achieves superior classification performance compared to fine-tuning the mainstream Convolutional Neural Networks (CNNs) for KOA classification. To address sample imbalance across KL grades, we propose an LDM-based generative model to synthesize new data. This approach enhances minority-class samples by optimizing the autoencoder's loss function and incorporating pathological labels into the LDM framework. Our approach achieves an average accuracy of 86.3%, an average precision of 85.3%, an F1 score of 0.855, and a mean absolute error reduced to 14.7% in the four-classification task, outperforming recent advanced methods. 
This study not only advances KOA classification techniques but also highlights the potential of integrating advanced neural architectures with generative models for medical image analysis.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2026-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146017388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2026-02-02DOI: 10.1088/2057-1976/ae4030
Josephine La Macchia, Alessandro Desy, Claire Cohalan, Taehyung Peter Kim, Shirin A Enger
Objective: In radioembolization, SPECT/CT planning scans are often acquired during free breathing, which can introduce motion-related blurring and misregistration between SPECT and CT, leading to dosimetric inaccuracies. This study quantifies the impact of respiratory motion on absorbed dose metrics, namely the tumor-to-normal tissue (T/N) ratio, dose volume histograms, and mean dose, using several voxel-based dosimetry methods. This study supports standardization efforts through experimental measurements using a motion-enabled phantom.
Approach: Motion effects in pre-therapy imaging were evaluated using a Jaszczak phantom filled with technetium-99m, simulating activity in lesion and background volumes. SPECT/CT scans were acquired with varying cranial-caudal motion amplitudes from the central position ±0, ±5, ±6.5, ±10, ±12.5, and ±15 mm. The impact of motion-related misregistration during scanning on dosimetry was also examined. Five dosimetry methods were evaluated: Monte Carlo simulation with uniform reference activity (MC REF), Monte Carlo simulation based on SPECT images (MC SPECT), Simplicity™ (Boston Scientific), the local deposition method, and voxel-S-value convolution. Absorbed dose metrics of mean dose, dose volume histogram dosimetric indices (D50, D70, D90), and T/N ratio were obtained to quantify motion effects and evaluate clinical suitability.
Main results: Mean absorbed dose values for the lesion and background were consistent across methods within uncertainties, though discrepancies were noted in non-lesion low-density regions. Respiratory motion reduced lesion dose by 16-25% and increased background dose by 13-32%, although the latter represented only a 1-2 Gy change. These shifts led to a 28-43% decrease in the T/N ratio at ±12.5 mm motion amplitude. Misregistration due to motion also significantly impacted dosimetric accuracy.
Significance: The study demonstrated agreement among the five dosimetry methods and revealed that respiratory motion can lead to substantial underestimation of the lesion dose and T/N ratio. Since the T/N ratio is critical for patient selection and activity prescription, accounting for respiratory motion is essential for accurate radioembolization dosimetry.
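As a rough illustration of how the reported dose shifts compound in the T/N ratio, the sketch below uses hypothetical mean doses with shift factors chosen within the reported ranges (lesion -20%, background +25%); these numbers are illustrative, not the study's measurements.

```python
# Hypothetical sketch: motion-induced shifts in lesion and background mean
# dose compound multiplicatively in the tumor-to-normal (T/N) ratio.
def tn_ratio(lesion_dose_gy, background_dose_gy):
    """T/N ratio from mean absorbed doses in lesion and normal tissue."""
    return lesion_dose_gy / background_dose_gy

static_tn = tn_ratio(100.0, 5.0)                 # assumed motion-free doses
moving_tn = tn_ratio(100.0 * 0.80, 5.0 * 1.25)   # lesion -20%, background +25%
relative_decrease = 1.0 - moving_tn / static_tn  # ~0.36, inside the 28-43% range
```

A 20% lesion decrease combined with a 25% background increase yields a 36% T/N drop, showing why the T/N decrease exceeds either individual dose shift.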
{"title":"Quantifying respiratory motion effects on dosimetry in hepatic radioembolization using experimental phantom measurements.","authors":"Josephine La Macchia, Alessandro Desy, Claire Cohalan, Taehyung Peter Kim, Shirin A Enger","doi":"10.1088/2057-1976/ae4030","DOIUrl":"https://doi.org/10.1088/2057-1976/ae4030","url":null,"abstract":"<p><strong>Objective: </strong>In radioembolization, SPECT/CT planning scans are often acquired during free breathing, which can introduce motion-related blurring and misregistration between SPECT and CT, leading to dosimetric inaccuracies. This study quantifies the impact of respiratory motion on absorbed dose metrics-tumor-to-normal tissue (T/N) ratio, dose volume histograms, and mean dose-using several voxel-based dosimetry methods. This study supports standardization efforts through experimental measurements using a motion-enabled phantom.

Approach: Motion effects in pre-therapy imaging were evaluated using a Jaszczak phantom filled with technetium-99m, simulating activity in lesion and background volumes. SPECT/CT scans were acquired with varying cranial-caudal motion amplitudes from the central position ±0, ±5, ±6.5, ±10, ±12.5, and ±15 mm. The impact of motion-related misregistration during scanning on dosimetry was also examined. Five dosimetry methods were evaluated: Monte Carlo simulation with uniform reference activity (MC REF), Monte Carlo simulation based on SPECT images (MC SPECT), Simplicity™ (Boston Scientific), the local deposition method, and voxel-S-value convolution. Absorbed dose metrics of mean dose, dose volume histogram dosimetric indices (D50, D70, D90), and T/N ratio were obtained to quantify motion effects and evaluate clinical suitability.

Main results: Mean absorbed dose values for the lesion and background were consistent across methods within uncertainties, though discrepancies were noted in non-lesion low-density regions. Respiratory motion reduced lesion dose by 16-25% and increased background dose by 13-32%, although the latter represented only a 1-2 Gy change. These shifts led to a 28-43% decrease in the T/N ratio at ±12.5 mm motion amplitude. Misregistration due to motion also significantly impacted dosimetric accuracy.
Significance: The study demonstrated agreement among the five dosimetry methods and revealed that respiratory motion can lead to substantial underestimation of the lesion dose and T/N ratio. Since the T/N ratio is critical for patient selection and activity prescription, accounting for respiratory motion is essential for accurate radioembolization dosimetry.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2026-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146103673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2026-01-30DOI: 10.1088/2057-1976/ae38de
M Sreenivasan, S Madhavendranath, Anu Mary Chacko
Electronic health records (EHRs) capture longitudinal multi-visit patient journeys but are difficult to analyze due to temporal irregularity, multimorbidity, and heterogeneous coding. This study introduces a temporal and comorbidity-aware trajectory representation that restructures admissions into ordered symbolic visit states while preserving diagnostic progression, secondary comorbidities, procedure categories, demographics, outcomes, and inter-visit intervals. These symbolic states are subsequently encoded as fixed-length numerical vectors suitable for computational analysis. Validation was conducted in two stages: Stage I assessed construction fidelity using coverage metrics, comorbidity preservation, diagnostic transition structures, and exact inter-visit gap encoding, and Stage II assessed analytical utility through clustering experiments using different clustering approaches like sequence similarity, Gaussian Mixture Models (GMM), and a temporal LSTM autoencoder (TS-LSTM). Proof of concept was demonstrated by encoding a subset of patient cohorts from the MIMIC-IV database consisting of 2,280 patients with 8,849 admissions having complete primary diagnosis coverage and near-complete secondary coverage. The Stage I assessment, consisting of cohort-level coverage metrics, confirmed that the transformation preserved essential clinical information and key properties of longitudinal EHRs. In Stage II, clustering experiments validated the analytical utility of the representation across sequence-based, Gaussian mixture, and temporal LSTM autoencoder approaches. Ablation studies further demonstrated that both multimorbidity depth and inter-visit gap encoding are critical to maintaining cluster separability and temporal fidelity. The findings show that explicit encoding of comorbidity and timing improves interpretability and subgroup coherence. 
Although evaluated on a single dataset, the use of standardised ICD-10 EHR structure supports the assumption that the framework can generalise across healthcare settings; future work will incorporate multimodal data and external validation.
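The trajectory construction the abstract describes can be sketched roughly as follows; the admission records, field names, and ICD-10 codes are illustrative assumptions, not the authors' schema.

```python
# Illustrative sketch (assumed schema, not the authors' code): restructure
# admissions into ordered symbolic visit states that preserve the primary
# diagnosis, comorbidity (secondary-code) depth, and inter-visit gap in days.
from datetime import date

def to_visit_states(admissions):
    states, prev_date = [], None
    for adm in sorted(admissions, key=lambda a: a["date"]):
        # Exact inter-visit gap encoding: days since the previous admission.
        gap_days = (adm["date"] - prev_date).days if prev_date else 0
        states.append((adm["primary"], len(adm["secondary"]), gap_days))
        prev_date = adm["date"]
    return states

admissions = [
    {"date": date(2020, 4, 2),  "primary": "I50", "secondary": ["E11", "N18"]},
    {"date": date(2020, 1, 10), "primary": "I10", "secondary": ["E11"]},
]
states = to_visit_states(admissions)  # [('I10', 1, 0), ('I50', 2, 83)]
```

Each tuple is a symbolic visit state; in the study's pipeline such states would then be mapped to fixed-length numerical vectors before clustering.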
{"title":"Temporal and comorbidity-aware representation of longitudinal patient trajectories from electronic health records.","authors":"M Sreenivasan, S Madhavendranath, Anu Mary Chacko","doi":"10.1088/2057-1976/ae38de","DOIUrl":"10.1088/2057-1976/ae38de","url":null,"abstract":"<p><p>Electronic health records (EHRs) capture longitudinal multi-visit patient journeys but are difficult to analyze due to temporal irregularity, multimorbidity, and heterogeneous coding. This study introduces a temporal and comorbidity-aware trajectory representation that restructures admissions into ordered symbolic visit states while preserving diagnostic progression, secondary comorbidities, procedure categories, demographics, outcomes, and inter-visit intervals. These symbolic states are subsequently encoded as fixed-length numerical vectors suitable for computational analysis. Validation was conducted in two stages: Stage I assessed construction fidelity using coverage metrics, comorbidity preservation, diagnostic transition structures, and exact inter-visit gap encoding, and Stage II assessed analytical utility through clustering experiments using different clustering approaches like sequence similarity, Gaussian Mixture Models (GMM), and a temporal LSTM autoencoder (TS-LSTM). Proof of concept was demonstrated by encoding a subset of patient cohorts from the MIMIC-IV database consisting of 2,280 patients with 8,849 admissions having complete primary diagnosis coverage and near-complete secondary coverage. The Stage I assessment, consisting of cohort-level coverage metrics, confirmed that the transformation preserved essential clinical information and key properties of longitudinal EHRs. In Stage II, clustering experiments validated the analytical utility of the representation across sequence-based, Gaussian mixture, and temporal LSTM autoencoder approaches. 
Ablation studies further demonstrated that both multimorbidity depth and inter-visit gap encoding are critical to maintaining cluster separability and temporal fidelity. The findings show that explicit encoding of comorbidity and timing improves interpretability and subgroup coherence. Although evaluated on a single dataset, the use of standardised ICD-10 EHR structure supports the assumption that the framework can generalise across healthcare settings; future work will incorporate multimodal data and external validation.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2026-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145984350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}