An optimized EEG-based intrinsic brain network for depression detection using differential graph centrality
Pub Date: 2025-12-17 | DOI: 10.1088/2057-1976/ae2689
Nausheen Ansari, Yusuf Khan, Omar Farooq
Millions of adults worldwide suffer from Major Depressive Disorder (MDD). Network-theoretic studies of functional brain dynamics often rely on fMRI to identify perturbed connectivity in depressed individuals. However, the limited temporal resolution of fMRI restricts its ability to capture the fast dynamics of functional connectivity (FC). Electroencephalography (EEG), which can track functional brain dynamics on a millisecond scale, may therefore serve as a diagnostic tool by exploiting the dynamics of intrinsic brain networks at the sensor level. This research proposes a unique neural marker for depression detection by analyzing long-range functional neurodynamics between the default mode network (DMN) and visual network (VN) via optimal EEG nodes. While DMN abnormalities in depression are well documented, the interactions between the DMN and VN, which reflect visual imagery at rest, remain unclear. A novel differential graph centrality index is applied to reduce the high-dimensional feature space representing EEG temporal neurodynamics, yielding an optimized brain network for MDD detection. The proposed method achieves exceptional classification performance, with an average accuracy, F1 score, and MCC of 99.76%, 0.998, and 0.9995 for the MODMA dataset and 99.99%, 0.999, and 0.9998 for the HUSM dataset, respectively. The findings suggest that a significant decrease in connection density within the beta band (15-30 Hz) in depressed individuals reflects disrupted long-range inter-network topology, which could serve as a reliable neural marker for depression detection and monitoring. Furthermore, weak FC links between the DMN and VN indicate disengagement between the two networks, signifying progressive cognitive decline, weak memory, and disrupted thinking at rest, symptoms that often accompany MDD.
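The exact differential graph centrality index is not specified in the abstract; the sketch below only illustrates, under stated assumptions, how a beta-band (15-30 Hz) EEG functional-connectivity graph and its connection density can be computed, with a hypothetical degree-based "differential centrality" contrasting DMN- and VN-associated channels.

    # Minimal sketch (not the authors' exact pipeline): beta-band FC graph, connection
    # density, and a hypothetical degree-difference index between two channel groups.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def beta_band_fc(eeg, fs):
        """eeg: (n_channels, n_samples). Absolute correlation of beta-band amplitude envelopes."""
        b, a = butter(4, [15, 30], btype="bandpass", fs=fs)
        beta = filtfilt(b, a, eeg, axis=1)
        env = np.abs(hilbert(beta, axis=1))          # amplitude envelope per channel
        fc = np.abs(np.corrcoef(env))                # channel-by-channel FC matrix
        np.fill_diagonal(fc, 0.0)
        return fc

    def connection_density(fc, thr=0.5):
        """Fraction of possible edges whose FC exceeds a threshold."""
        n = fc.shape[0]
        edges = (fc > thr).sum() / 2                 # each edge counted twice in the symmetric matrix
        return edges / (n * (n - 1) / 2)

    def differential_degree_centrality(fc, dmn_idx, vn_idx, thr=0.5):
        """Hypothetical index: mean binarized node degree of DMN channels minus VN channels."""
        adj = (fc > thr).astype(float)
        return adj[dmn_idx].sum(axis=1).mean() - adj[vn_idx].sum(axis=1).mean()

    # Random data standing in for a 19-channel, 10 s recording at 250 Hz
    rng = np.random.default_rng(0)
    eeg = rng.standard_normal((19, 2500))
    fc = beta_band_fc(eeg, fs=250)
    print(connection_density(fc), differential_degree_centrality(fc, [0, 1, 2], [16, 17, 18]))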
{"title":"An optimized EEG-based intrinsic brain network for depression detection using differential graph centrality.","authors":"Nausheen Ansari, Yusuf Khan, Omar Farooq","doi":"10.1088/2057-1976/ae2689","DOIUrl":"10.1088/2057-1976/ae2689","url":null,"abstract":"<p><p>Millions of adults suffer from Major Depressive Disorder (MDD), globally. Applying network theory to study functional brain dynamics often use fMRI modality to identify the perturbed connectivity in depressed individuals. However, the weak temporal resolution of fMRI limits its ability to access the fast dynamics of functional connectivity (FC). Therefore, Electroencephalography (EEG), which can track functional brain dynamics every millisecond, may serve as a diagnostic marker to utilizing the dynamics of intrinsic brain networks at the sensor level. This research proposes a unique neural marker for depression detection by analyzing long-range functional neurodynamics between the default mode network (DMN) and visual network (VN) via optimal EEG nodes. While DMN abnormalities in depression are well documented, the interactions between the DMN and VN, which reflect visual imagery at rest, remain unclear. Subsequently, a novel differential graph centrality index is applied to reduce a high-dimensional feature space representing EEG temporal neurodynamics, which produced an optimized brain network for MDD detection. The proposed method achieves an exceptional classification performance with an average accuracy, f1 score, and MCC of 99.76%, 0.998, and 0.9995 for the MODMA and 99.99%, 0.999 and 0.9998 for the HUSM datasets, respectively. The findings of this study suggests that a significant decrease in connection density within the beta band (15-30 Hz) in depressed individuals exhibits disrupted long-range inter-network topology, which could serve as a reliable neural marker for depression detection and monitoring. Furthermore, weak FC links between the DMN and VN indicate disengagement between the DMN and VN, which signifies progressive cognitive decline, weak memory, and disrupted thinking at rest, often accompanied by MDD.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145660172","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating corticokinematic coherence using electroencephalography and human pose estimation
Pub Date: 2025-12-16 | DOI: 10.1088/2057-1976/ae27d5
E A Lorenz, X Su, N Skjæret-Maroni
Objective. While peripheral mechanisms of proprioception are well understood, the cortical processing of proprioceptive feedback during dynamic and complex movements remains less clear. Corticokinematic coherence (CKC), which quantifies the coupling between limb movements and sensorimotor cortex activity, offers a way to investigate this cortical processing. However, ecologically valid CKC assessment poses technical challenges. By integrating electroencephalography (EEG) with human pose estimation (HPE), this study therefore assesses the feasibility and validity of a novel methodology for measuring CKC during upper-limb movements in real-world and virtual reality (VR) settings. Approach. Nine healthy adults performed repetitive finger-tapping (1 Hz) and reaching (0.5 Hz) tasks in real and VR settings. Movements were recorded with temporal synchronization using 64-channel EEG, optical marker-based motion capture, and monocular deep-learning-based HPE via MediaPipe. Alongside the CKC, the kinematic agreement between both systems was assessed. Main results. CKC was detected using both marker-based and HPE-based kinematics across tasks and environments, with significant coherence observed in most participants. HPE-derived CKC closely matched marker-based measurements for most joints, exhibiting strong reliability and equivalent coherence magnitudes between real and VR conditions. Significance. This study validates a noninvasive and portable EEG-HPE approach for assessing cortical proprioceptive processing in ecologically valid settings, enabling broader clinical and rehabilitation applications.
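CKC is conventionally computed as spectral coherence between a cortical signal and a kinematic signal at the movement rate; the sketch below shows that computation on synthetic data, with the sampling rate, window length, and signal choices being assumptions rather than the authors' exact analysis.

    # Minimal sketch: magnitude-squared coherence between one EEG channel and a kinematic
    # trace (e.g., fingertip speed from pose estimation), evaluated at the 1 Hz tapping rate.
    import numpy as np
    from scipy.signal import coherence

    fs = 500.0                                   # assumed common sampling rate after resampling
    t = np.arange(0, 60, 1 / fs)
    rng = np.random.default_rng(1)
    kin = np.cos(2 * np.pi * 1.0 * t) + 0.5 * rng.standard_normal(t.size)    # 1 Hz movement proxy
    eeg = 0.3 * np.cos(2 * np.pi * 1.0 * t + 0.4) + rng.standard_normal(t.size)

    f, cxy = coherence(eeg, kin, fs=fs, nperseg=int(4 * fs))   # 4 s windows -> 0.25 Hz resolution
    ckc_1hz = cxy[np.argmin(np.abs(f - 1.0))]
    print(f"CKC at 1 Hz: {ckc_1hz:.2f}")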
{"title":"Evaluating corticokinematic coherence using electroencephalography and human pose estimation.","authors":"E A Lorenz, X Su, N Skjæret-Maroni","doi":"10.1088/2057-1976/ae27d5","DOIUrl":"10.1088/2057-1976/ae27d5","url":null,"abstract":"<p><p><i>Objective.</i>While peripheral mechanisms of proprioception are well understood, the cortical processing of its feedback during dynamic and complex movements remains less clear. Corticokinematic coherence (CKC), which quantifies the coupling between limb movements and sensorimotor cortex activity, offers a way to investigate this cortical processing. However, ecologically valid CKC assessment poses technical challenges. Thus, by integrating Electroencephalography (EEG) with Human Pose Estimation (HPE), this study validates the feasibility and validity of a novel methodology for measuring CKC during upper-limb movements in real-world and virtual reality (VR) settings.<i>Approach.</i>Nine healthy adults performed repetitive finger-tapping (1 Hz) and reaching (0.5 Hz) tasks in real and VR settings. Their execution was recorded temporally synchronized using a 64-channel EEG, optical marker-based motion capture, and monocular deep-learning-based HPE via Mediapipe. Alongside the CKC, the kinematic agreement between both systems was assessed.<i>Main results.</i>CKC was detected using both marker-based and HPE-based kinematics across tasks and environments, with significant coherence observed in most participants. HPE-derived CKC closely matched marker-based measurements for most joints, exhibiting strong reliability and equivalent coherence magnitudes between real and VR conditions.<i>Significance.</i>This study validates a noninvasive and portable EEG-HPE approach for assessing cortical proprioceptive processing in ecologically valid settings, enabling broader clinical and rehabilitation applications.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145676290","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
2D Boundary Shape Detection Based on Camera for Enhanced Electrode Placement in Lung Electrical Impedance Tomography
Pub Date: 2025-12-15 | DOI: 10.1088/2057-1976/ae2c8e
Leonard Brainaparte Kwee, Marlin Ramadhan Baidillah, Muhammad Nurul Puji, Winda Astuti
Accurate electrode placement is critical for improving image fidelity in lung Electrical Impedance Tomography (EIT), yet current systems rely on simplified circular templates that neglect patient-specific anatomical variation. This paper presents a novel, low-cost pipeline that uses smartphone-based photogrammetry to generate individualized 3D torso reconstructions for boundary-aligned electrode placement. The method includes automated video frame extraction, mesh post-processing, interactive 2D boundary extraction, real-world anatomical scaling, and both manual and automatic electrode detection. We evaluate two photogrammetry pipelines, commercial (RealityCapture) and open-source (Meshroom + MeshLab), across five subjects: a mannequin and four human participants. Results demonstrate sub-centimeter Mean Absolute Error (MAE 0.42-0.60 cm) and Mean Percentage Error (MPE 8.56-11.51%) in electrode placement accuracy. Repeatability analysis shows good consistency, with a Coefficient of Variation below 15% for MPE and below 19% for MAE. The generated subject-specific finite element meshes achieve 98.79% accuracy in cross-sectional area compared to direct measurements. While the current implementation requires 15-30 minutes of processing time and multiple software tools, it establishes a foundation for more precise and personalized bioimpedance imaging that could benefit both clinical EIT and broader applications in neurological and industrial domains.
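For the placement-accuracy metrics quoted above, a sketch of MAE and MPE over detected versus reference electrode coordinates is given below; the percentage-error definition (error relative to the reference distance from the boundary origin) is an assumption, and the coordinates are simulated.

    # Minimal sketch (illustrative only): MAE (cm) and MPE (%) of electrode placement.
    import numpy as np

    def electrode_errors(detected_cm, reference_cm):
        """detected_cm, reference_cm: (n_electrodes, 2) boundary coordinates in cm."""
        d = np.linalg.norm(detected_cm - reference_cm, axis=1)   # per-electrode distance error
        mae = d.mean()
        ref_norm = np.linalg.norm(reference_cm, axis=1)
        mpe = 100.0 * np.mean(d / ref_norm)                      # assumed percentage definition
        return mae, mpe

    ref = np.array([[10.0, 0.0], [7.1, 7.1], [0.0, 10.0], [-7.1, 7.1]])
    det = ref + np.random.default_rng(2).normal(0, 0.4, ref.shape)   # simulated ~4 mm placement noise
    print(electrode_errors(det, ref))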
{"title":"2D Boundary Shape Detection Based on Camera for Enhanced Electrode Placement in Lung Electrical Impedance Tomography.","authors":"Leonard Brainaparte Kwee, Marlin Ramadhan Baidillah, Muhammad Nurul Puji, Winda Astuti","doi":"10.1088/2057-1976/ae2c8e","DOIUrl":"https://doi.org/10.1088/2057-1976/ae2c8e","url":null,"abstract":"<p><p>Accurate electrode placement is critical for improving image fidelity in lung Electrical Impedance Tomography (EIT), yet current systems rely on simplified circular templates that neglect patient-specific anatomical variation. This paper presents a novel, low-cost pipeline that uses smartphone-based photogrammetry to generate individualized 3D torso reconstructions for boundary-aligned electrode placement. The method includes automated video frame extraction, mesh post-processing, interactive 2D boundary extraction, real-world anatomical scaling, and both manual and automatic electrode detection. We evaluate two photogrammetry pipelines - commercial (RealityCapture) and open-source (Meshroom + MeshLab) - across five subjects including a mannequin and four human participants. Results demonstrate sub-centimeter Mean Absolute Error (MAE 0.42-0.60 cm) and Mean Percentage Error (MPE 8.56-11.51%) in electrode placement accuracy. Repeatability analysis shows good consistency with Coefficient of Variation below 15% for MPE and 19% for MAE. The generated subject-specific finite element meshes achieve 98.79% accuracy in cross-sectional area compared to direct measurements. While the current implementation requires 15-30 minutes processing time and multiple software tools, it establishes a foundation for more precise and personalized bioimpedance imaging that could benefit both clinical EIT and broader applications in neurological and industrial domains.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145761961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cross-domain correlation analysis to improve SSVEP signals recognition in brain-computer interfaces
Pub Date: 2025-12-15 | DOI: 10.1088/2057-1976/ae2772
Kaiwei Hu, Yong Wang, Kaixiang Tu, Hongxiang Guo, Jun Yan
The recognition of steady-state visual evoked potential (SSVEP) signals in brain-computer interface (BCI) systems is challenging due to the lack of training data and significant inter-subject variability. To address this, we propose a novel unsupervised transfer learning framework that enhances SSVEP recognition without requiring any subject-specific calibration. Our method employs a three-stage pipeline: (1) preprocessing with similarity-aware subject selection and Euclidean alignment to mitigate domain shifts; (2) hybrid feature extraction combining canonical correlation analysis (CCA) and task-related component analysis (TRCA) to enhance signal-to-noise ratio and phase sensitivity; and (3) weighted correlation fusion for robust classification. Extensive evaluations on the Benchmark and BETA datasets demonstrate that our approach achieves state-of-the-art performance, with average accuracies of 83.20% and 69.08% at a 1 s data length, respectively, significantly outperforming existing methods such as ttCCA and Ensemble-DNN. The highest information transfer rate reaches 157.53 bits min⁻¹, underscoring the framework's practical potential for plug-and-play SSVEP-based BCIs.
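As context for the CCA component of the pipeline, the sketch below shows the standard CCA-based SSVEP target identification step on placeholder data; the subject selection, Euclidean alignment, TRCA, and weighted fusion stages described above are omitted, and the channel count, sampling rate, and frequency set are assumptions.

    # Minimal sketch of the standard CCA step in SSVEP recognition.
    import numpy as np
    from sklearn.cross_decomposition import CCA

    def cca_score(eeg_trial, ref):
        """Max canonical correlation between a trial (channels x samples) and a reference set."""
        cca = CCA(n_components=1)
        u, v = cca.fit_transform(eeg_trial.T, ref.T)
        return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

    def sine_cosine_reference(freq, fs, n_samples, n_harmonics=3):
        t = np.arange(n_samples) / fs
        ref = [f(2 * np.pi * h * freq * t) for h in range(1, n_harmonics + 1) for f in (np.sin, np.cos)]
        return np.array(ref)

    fs, n = 250, 250                                            # 1 s of data (assumed)
    trial = np.random.default_rng(3).standard_normal((8, n))    # 8-channel EEG trial placeholder
    freqs = [8.0, 10.0, 12.0, 15.0]                             # candidate stimulation frequencies
    scores = [cca_score(trial, sine_cosine_reference(f, fs, n)) for f in freqs]
    print("predicted target:", freqs[int(np.argmax(scores))])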
{"title":"Cross-domain correlation analysis to improve SSVEP signals recognition in brain-computer interfaces.","authors":"Kaiwei Hu, Yong Wang, Kaixiang Tu, Hongxiang Guo, Jun Yan","doi":"10.1088/2057-1976/ae2772","DOIUrl":"10.1088/2057-1976/ae2772","url":null,"abstract":"<p><p>The recognition of steady-state visual evoked potential (SSVEP) signals in brain-computer interface (BCI) systems is challenging due to the lack of training data and significant inter-subject variability. To address this, we propose a novel unsupervised transfer learning framework that enhances SSVEP recognition without requiring any subject-specific calibration. Our method employs a three-stage pipeline: (1) preprocessing with similarity-aware subject selection and Euclidean alignment to mitigate domain shifts; (2) hybrid feature extraction combining canonical correlation analysis (CCA) and task-related component analysis (TRCA) to enhance signal-to-noise ratio and phase sensitivity; and (3) weighted correlation fusion for robust classification. Extensive evaluations on the Benchmark and BETA datasets demonstrate that our approach achieves state-of-the-art performance, with average accuracies of 83.20% and 69.08% at 1 s data length, respectively-significantly outperforming existing methods like ttCCA and Ensemble-DNN. The highest information transfer rate reaches 157.53 bits min<sup>-1</sup>, underscoring the framework's practical potential for plug-and-play SSVEP-based BCIs.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145666631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multisequence MRI-driven assessment of PD-L1 expression in non-small cell lung cancer: a pilot study
Pub Date: 2025-12-11 | DOI: 10.1088/2057-1976/ae2621
Agnese Robustelli Test, Chandra Bortolotto, Sithin Thulasi Seetha, Alessandra Marrocco, Carlotta Pairazzi, Gaia Messana, Leonardo Brizzi, Domenico Zacà, Robert Grimm, Francesca Brero, Manuel Mariani, Raffaella Fiamma Cabini, Giulia Maria Stella, Lorenzo Preda, Alessandro Lascialfari
Objective. Lung cancer remains the leading cause of cancer-related mortality worldwide, with Non-Small Cell Lung Cancer (NSCLC) accounting for approximately 85% of all cases. Programmed cell Death Ligand-1 (PD-L1) is a well-established biomarker that guides immunotherapy in advanced-stage NSCLC and is currently evaluated via invasive biopsy procedures. This study aims to develop and validate a non-invasive pipeline for stratifying PD-L1 expression using quantitative analysis of IVIM parameter maps (diffusion D, pseudo-diffusion D*, and perfusion fraction pf) and T1-VIBE MRI acquisitions. Approach. MRI data from 43 NSCLC patients were analysed and labelled as PD-L1 positive (≥1%) or negative (<1%) based on immunohistochemistry. After pre-processing, 1,171 radiomic features and 512 deep learning features were obtained. Three feature sets (radiomic, deep learning, and fusion) were tested with Logistic Regression, Random Forest, and XGBoost. Four discriminative features were selected using the Mann-Whitney U-test, and model performance was primarily assessed using the area under the receiver operating characteristic curve (AUC). Robustness was ensured through repeated stratified 5-fold cross-validation, bootstrap-derived confidence intervals, and permutation tests. Main Results. Logistic Regression generally demonstrated the highest classification performance, with AUC values ranging from 0.78 to 0.92 across all feature sets. Fusion models outperformed or matched the best standalone radiomics or deep learning model. Among the multisequence MRI inputs, the IVIM-D fusion features yielded the best performance with an AUC of 0.92, followed by the IVIM-D* radiomic features with a similar AUC of 0.91. For IVIM-pf and T1-VIBE derived features, the fusion models yielded the best AUC values of 0.87 and 0.90, respectively. Significance. These results highlight the potential of a combined radiomic-deep learning approach to effectively detect PD-L1 expression from MRI acquisitions, paving the way for a non-invasive PD-L1 evaluation procedure.
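The evaluation scheme described in the Approach (Mann-Whitney feature ranking, logistic regression, repeated stratified 5-fold cross-validation with AUC scoring) can be sketched as below; the feature matrix, labels, and feature count are placeholders, not the study's data, and feature selection is done outside the CV loop purely for brevity.

    # Minimal sketch of Mann-Whitney feature selection + logistic regression with repeated CV.
    import numpy as np
    from scipy.stats import mannwhitneyu
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(4)
    X = rng.standard_normal((43, 200))              # 43 patients, placeholder feature matrix
    y = np.array([0] * 21 + [1] * 22)               # PD-L1 negative (0) vs positive (1), placeholder labels

    # Rank features by Mann-Whitney U p-value and keep the four most discriminative
    pvals = [mannwhitneyu(X[y == 0, j], X[y == 1, j]).pvalue for j in range(X.shape[1])]
    top4 = np.argsort(pvals)[:4]

    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
    auc = cross_val_score(model, X[:, top4], y, scoring="roc_auc", cv=cv)
    print(f"mean AUC: {auc.mean():.2f}")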
{"title":"Multisequence MRI-driven assessment of PD-L1 expression in non-small cell lung cancer: a pilot study.","authors":"Agnese Robustelli Test, Chandra Bortolotto, Sithin Thulasi Seetha, Alessandra Marrocco, Carlotta Pairazzi, Gaia Messana, Leonardo Brizzi, Domenico Zacà, Robert Grimm, Francesca Brero, Manuel Mariani, Raffaella Fiamma Cabini, Giulia Maria Stella, Lorenzo Preda, Alessandro Lascialfari","doi":"10.1088/2057-1976/ae2621","DOIUrl":"https://doi.org/10.1088/2057-1976/ae2621","url":null,"abstract":"<p><p><i>Objective.</i>Lung cancer remains the leading cause of cancer-related mortality worldwide, with Non-Small Cell Lung Cancer (NSCLC) accounting for approximately 85% of all cases. Programmed cell Death Ligand-1 (PD-L1) is a well-established biomarker that guides immunotherapy in advanced-stage NSCLC, currently evaluated via invasive biopsy procedures. This study aims to develop and validate a non-invasive pipeline for stratifying PD-L1 expression using quantitative analysis of IVIM parameter maps-diffusion (D), pseudo-diffusion (D*), perfusion fraction (pf)-and T1-VIBE MRI acquisitions.<i>Approach.</i>MRI data from 43 NSCLC patients were analysed and labelled as PD-L1 positive (≥1%) or negative (<1%) based on immunohistochemistry exam. After pre-processing, 1,171 radiomic features and 512 deep learning features were obtained. Three feature sets (radiomic, deep learning, and fusion) were tested with Logistic Regression, Random Forest, and XGBoost. Four discriminative features were selected using the Mann-Whitney U-test, and model performance was primarily assessed using the area under the receiver operating characteristic curve (AUC). Robustness was ensured through repeated stratified 5-fold cross-validation, bootstrap-derived confidence intervals, and permutation test.<i>Main Results.</i>Logistic Regression generally demonstrated the highest classification performance, with AUC values ranging from 0.78 to 0.92 across all feature sets. Fusion models outperformed or matched the performance of the best standalone radiomics or deep learning model. Among multisequence MRI, the IVIM-D fusion features yielded the best performance with an AUC of 0.92, followed by IVIM-D* radiomic features that showed a similar AUC of 0.91. For IVIM-pf and T1-VIBE derived features, the fusion model yielded the best AUC values of 0.87 and 0.90, respectively.<i>Significance.</i>The obtained results highlight the potential of a combined radiomic-deep learning approach to effectively detect PD-L1 expression from MRI acquisitions, paving the way for a non-invasive PD-L1 evaluation procedure.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":"12 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145721043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An in vitro investigation of 5-aminolevulinic acid and acridine orange as sensitizers in radiodynamic therapy for prostate and breast cancer
Pub Date: 2025-12-11 | DOI: 10.1088/2057-1976/ae2688
Tristan K Gaddis, Dusica Cvetkovic, Dae-Myoung Yang, Lili Chen, C-M Charlie Ma
Purpose. Radiodynamic Therapy (RDT) is an emerging technique that enhances the therapeutic effects of radiation by using photosensitizers to amplify tumor cell damage while minimizing harm to normal tissues. This in vitro investigation compares the biocompatibility and sensitizing efficacy of two candidate photosensitizers, 5-aminolevulinic acid (5-ALA) and acridine orange (AO), in human breast adenocarcinoma (MCF7) and prostate adenocarcinoma (PC3) cell lines. Materials and Methods. MCF7 and PC3 cell lines were cultured and exposed to a range of 5-ALA and AO concentrations to assess biocompatibility using PrestoBlue viability assays. Based on these results, optimal concentrations were selected for irradiation experiments. Cells were then seeded in T-25 flasks and incubated with 5-ALA or AO prior to receiving 2 Gy or 4 Gy of megavoltage photon radiation (18 MV or 45 MV). Clonogenic assays were performed to determine the surviving fractions of the cells. Results. 5-ALA exhibited a broader biocompatibility profile than AO, remaining non-cytotoxic up to 100 μg ml⁻¹. In contrast, AO showed cytotoxic effects above 1 μg ml⁻¹. At 18 MV, limited radiosensitization was observed, except at higher 5-ALA concentrations. However, at 45 MV, both sensitizers significantly reduced cell survival, particularly at 4 Gy. The most pronounced effect was observed with 100 μg ml⁻¹ 5-ALA, which consistently resulted in lower surviving fractions than AO across both cell lines. Each sensitizer demonstrated differing effectiveness depending on the cell line and photon energy used. Conclusions. Both 5-ALA and AO enhanced the cytotoxic effects of radiation, but 5-ALA demonstrated superior biocompatibility and more consistent radiosensitization across both cell lines. Notably, the effectiveness of both sensitizers increased with higher photon energy, reinforcing the importance of beam energy in RDT design. These results underscore the advantages of 5-ALA over AO and highlight the need to optimize both sensitizer selection and radiation energy in clinical applications.
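For readers unfamiliar with the clonogenic-assay arithmetic behind "surviving fraction", the sketch below shows the standard calculation (colony counts normalized by cells seeded and by the plating efficiency of unirradiated controls); the numbers are illustrative, not data from this study.

    # Minimal sketch of plating efficiency and surviving fraction from a clonogenic assay.
    def plating_efficiency(colonies, cells_seeded):
        return colonies / cells_seeded

    def surviving_fraction(colonies, cells_seeded, control_pe):
        return colonies / (cells_seeded * control_pe)

    control_pe = plating_efficiency(colonies=180, cells_seeded=300)            # 0.60
    sf_2gy = surviving_fraction(colonies=95, cells_seeded=300, control_pe=control_pe)
    sf_4gy = surviving_fraction(colonies=40, cells_seeded=300, control_pe=control_pe)
    print(f"SF(2 Gy) = {sf_2gy:.2f}, SF(4 Gy) = {sf_4gy:.2f}")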
{"title":"An<i>in vitro</i>investigation of 5-aminolevulinic acid and acridine orange as sensitizers in radiodynamic therapy for prostate and breast cancer.","authors":"Tristan K Gaddis, Dusica Cvetkovic, Dae-Myoung Yang, Lili Chen, C-M Charlie Ma","doi":"10.1088/2057-1976/ae2688","DOIUrl":"10.1088/2057-1976/ae2688","url":null,"abstract":"<p><p><i>Purpose.</i>Radiodynamic Therapy (RDT) is an emerging technique that enhances the therapeutic effects of radiation by using photosensitizers to amplify tumor cell damage while minimizing harm to normal tissues. This<i>in vitro</i>investigation compares the biocompatibility and sensitizing efficacy of two candidate photosensitizers, 5-aminolevulinic acid (5-ALA) and acridine orange (AO), in human breast adenocarcinoma (MCF7) and prostate adenocarcinoma (PC3) cell lines.<i>Materials and Methods.</i>MCF7 and PC3 cell lines were cultured and exposed to a range of 5-ALA and AO concentrations to assess biocompatibility using PrestoBlue viability assays. Based on these results, optimal concentrations were selected for irradiation experiments. Cells were then seeded in T-25 flasks and incubated with 5-ALA or AO prior to receiving 2 Gy or 4 Gy of megavoltage photon radiation (18 MV or 45 MV). Clonogenic assays were performed to determine the surviving fractions of the cells.<i>Results</i>. 5-ALA exhibited a broader biocompatibility profile than AO, remaining non-cytotoxic up to 100 μg ml<sup>-1</sup>. In contrast, AO showed cytotoxic effects above 1 μg ml<sup>-1</sup>. At 18 MV, limited radiosensitization was observed, except at higher 5-ALA concentrations. However, at 45 MV, both sensitizers significantly reduced cell survival, particularly at 4 Gy. The most pronounced effect was observed with 100 μg ml<sup>-1</sup>5-ALA, which consistently resulted in lower surviving fractions than AO across both cell lines. Each sensitizer demonstrated differing effectiveness depending on the cell line and photon energy used.<i>Conclusions</i>. Both 5-ALA and AO enhanced the cytotoxic effects of radiation, but 5-ALA demonstrated superior biocompatibility and more consistent radiosensitization across both cell lines. Notably, the effectiveness of both sensitizers increased with higher photon energy, reinforcing the importance of beam energy in RDT design. These results underscore the advantages of 5-ALA over AO and highlight the need to optimize both sensitizer selection and radiation energy in clinical applications.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145660182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
OCTSeg-UNeXt: an ultralight hybrid Conv-MLP network for retinal pathology segmentation in point-of-care OCT imaging
Pub Date: 2025-12-11 | DOI: 10.1088/2057-1976/ae2127
Shujun Men, Jiamin Wang, Yanke Li, Yuntian Bai, Lei Zhang, Li Huo
To enable efficient and accurate retinal lesion segmentation on resource-constrained point-of-care Optical Coherence Tomography (OCT) systems, we propose OCTSeg-UNeXt, an ultralight hybrid Convolution-Multilayer Perceptron (Conv-MLP) network optimized for OCT image analysis. Built upon the UNeXt architecture, our model integrates a Depthwise-Augmented Scale Context (DASC) module for adaptive multi-scale feature aggregation and a Group Fusion Bridge (GFB) to enhance information interaction between the encoder and decoder. Additionally, we employ a deep supervision strategy during training to improve structural learning and accelerate convergence. We evaluated our model on three publicly available OCT datasets. The comparative and ablation experiments show that our method achieves strong performance on multiple key metrics. Importantly, it does so with only 0.187 million parameters (Params) and 0.053 G floating-point operations (FLOPs), significantly fewer than UNeXt (0.246 M, 0.086 G) and UNet (17 M, 30.8 G). These findings demonstrate the proposed method's strong potential for deployment in Point-of-Care Imaging (POCI) systems, where computational efficiency and model compactness are crucial.
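The parameter savings behind such ultralight designs come largely from depthwise-separable convolutions; the sketch below, which is not the authors' architecture, compares parameter counts of a standard 3x3 convolution and a depthwise-plus-pointwise pair in PyTorch.

    # Minimal sketch: parameter count of a full 3x3 conv vs a depthwise-separable conv.
    import torch.nn as nn

    def count_params(m):
        return sum(p.numel() for p in m.parameters())

    standard = nn.Conv2d(64, 64, kernel_size=3, padding=1)                 # full 3x3 convolution
    depthwise_separable = nn.Sequential(
        nn.Conv2d(64, 64, kernel_size=3, padding=1, groups=64),            # depthwise 3x3
        nn.Conv2d(64, 64, kernel_size=1),                                  # pointwise 1x1
    )
    print(count_params(standard), count_params(depthwise_separable))       # ~36.9k vs ~4.8k weights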
{"title":"OCTSeg-UNeXt: an ultralight hybrid Conv-MLP network for retinal pathology segmentation in point-of-care OCT imaging.","authors":"Shujun Men, Jiamin Wang, Yanke Li, Yuntian Bai, Lei Zhang, Li Huo","doi":"10.1088/2057-1976/ae2127","DOIUrl":"10.1088/2057-1976/ae2127","url":null,"abstract":"<p><p>To enable efficient and accurate retinal lesion segmentation on resource-constrained point-of-care Optical Coherence Tomography (OCT) systems, we propose OCTSeg-UNeXt, an ultralight hybrid Convolution-Multilayer Perceptron (Conv-MLP) network optimized for OCT image analysis. Built upon the UNeXt architecture, our model integrates a Depthwise-Augmented Scale Context (DASC) module for adaptive multi-scale feature aggregation, and a Group Fusion Bridge (GFB) to enhance information interaction between the encoder and decoder. Additionally, we employ a deep supervision strategy during training to improve structural learning and accelerate convergence. We evaluated our model using three publicly available OCT datasets. The results of the comparative experiments and ablation experiments show that our method achieves powerful performance in multiple key indicators. Importantly, our method achieves this high performance with only 0.187 million parameters (Params) and 0.053 G Floating-Point Operations Per second (FLOPs), which is significantly lower than UNeXt (0.246M, 0.086G) and UNet (17M, 30.8G). These findings demonstrate the proposed method's strong potential for deployment in Point-of-Care Imaging (POCI) systems, where computational efficiency and model compactness are crucial.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145556211","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A multi-task cross-attention strategy to segment and classify polyps
Pub Date: 2025-12-11 | DOI: 10.1088/2057-1976/ae2b78
Franklin Sierra, Lina Ruiz, Fabio Martínez Carrillo
Polyps are the main biomarkers for diagnosing colorectal cancer. Their early detection and accurate characterization during colonoscopy procedures rely on expert observation. Nevertheless, such a task is prone to errors, particularly in morphological characterization. This work proposes a multi-task representation capable of segmenting polyps and stratifying their malignancy from individual colonoscopy frames. The approach employs a deep representation based on multi-head cross-attention, refined with morphological characterization learned from independent maps according to the degree of polyp malignancy. The proposed method was validated on the BKAI-IGH dataset, comprising 1200 samples (1000 white-light imaging samples and 200 NICE samples) with fine-grained segmentation masks. The results show an average IoU of 83.5% and a recall of 94%. Additionally, external dataset validation demonstrated the model's generalization capability. Inspired by conventional expert characterization, the proposed method integrates textural and morphological observations, enabling both polyp segmentation and the corresponding malignancy stratification. The proposed strategy achieves state-of-the-art performance on public datasets, showing promising results and demonstrating its ability to generate a polyp representation suitable for multiple tasks.
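The IoU and recall figures quoted above follow the standard mask-overlap definitions; the sketch below computes them from toy binary prediction and ground-truth masks.

    # Minimal sketch of segmentation IoU and pixel-level recall on binary masks.
    import numpy as np

    def iou(pred, gt):
        inter = np.logical_and(pred, gt).sum()
        union = np.logical_or(pred, gt).sum()
        return inter / union if union else 1.0

    def recall(pred, gt):
        tp = np.logical_and(pred, gt).sum()
        return tp / gt.sum() if gt.sum() else 1.0

    gt = np.zeros((8, 8), bool); gt[2:6, 2:6] = True       # toy ground-truth polyp mask
    pred = np.zeros((8, 8), bool); pred[3:7, 2:6] = True   # toy predicted mask, shifted by one row
    print(f"IoU = {iou(pred, gt):.2f}, recall = {recall(pred, gt):.2f}")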
{"title":"A multi-task cross-attention strategy to segment and classify polyps.","authors":"Franklin Sierra, Lina Ruiz, Fabio Martínez Carrillo","doi":"10.1088/2057-1976/ae2b78","DOIUrl":"https://doi.org/10.1088/2057-1976/ae2b78","url":null,"abstract":"<p><p>Polyps are the main biomarkers for diagnosing colorectal cancer. Their early detection and accurate characterization during colonoscopy procedures rely on expert observations. Nevertheless, such a task is prone to errors, particularly in morphological characterization. This work proposes a multi-task representation capable of segmenting polyps and stratifying their malignancy from individual colonoscopy frames. The approach employs a deep representation based on multi-head cross-attention, refined with morphological characterization learned from independent maps according to the degree of polyp malignancy. The proposed method was validated on the BKAI-IGH dataset, comprising 1200 samples (1000 white-light imaging and 200 NICE samples) with fine-grained segmentation masks. The results show an average IoU of 83.5% and a recall of 94%. Additionally, external dataset validation demonstrated the model's generalization capability. Inspired by conventional expert characterization, the proposed method integrates textural and morphological observations, allowing both tasks, polyp segmentation and the corresponding malignancy stratification. The proposed strategy achieves the state-of-the-art performance in public datasets, showing promising results and demonstrating its ability to generate a polyp representation suitable for multiple tasks.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145740819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessing photoplethysmography signal quality for wearable devices during unrestricted daily activities
Pub Date: 2025-12-10 | DOI: 10.1088/2057-1976/ae250f
Liang Wei, Yushun Gong, Yunchi Li, Jianjie Wang, Yongqin Li
Photoplethysmography (PPG) is widely used in wearable health monitors for tracking fundamental physiological parameters (e.g., heart rate and blood oxygen saturation) and for applications that require high-quality signals, such as blood pressure assessment and cardiac arrhythmia detection. However, motion artifacts and environmental noise significantly degrade the accuracy of PPG-derived physiological measurements, potentially causing false alarms or delayed diagnoses in longitudinal monitoring cohorts. While signal quality assessment (SQA) provides an effective solution, existing methods show insufficient robustness in ambulatory scenarios. This study concentrates on PPG signal quality detection and proposes a robust SQA algorithm for wearable devices under unrestricted daily activities. PPG and acceleration signals were acquired from 54 participants using a self-made physiological monitoring headband during daily activities and segmented into 35712 non-overlapping 5-second epochs. Each epoch was annotated with (1) PPG signal quality level (good: 10817; moderate: 14788; poor: 10107) and (2) activity state, classified as sedentary, light, moderate, or vigorous intensity. The dataset was stratified into training (80%) and testing (20%) subsets to maintain proportional representation. Fourteen discriminative features were extracted from four domains: morphological characteristics, time-frequency distributions, consistency and accuracy of physiological parameter estimation, and statistical properties of signal dynamics. Four machine learning algorithms were employed to train models for SQA. The random forest achieved the highest accuracy on the test set (95.6%), though the difference was not significant (p = 0.471) compared to the support vector machine (95.4%), naive Bayes (94.1%), and BP neural network (95.1%). Additionally, classification accuracy showed no statistically significant variation (p = 0.648) across light (95.3%), moderate (96.0%), and vigorous activity (100%) compared to sedentary (95.8%). All features exhibited significant differences (p < 0.05) across good/moderate/poor quality segments in all pairwise comparisons. The results indicate that the proposed feature set achieves robust SQA, maintaining consistently high classification accuracy across all activity intensities. This performance stability enables real-time implementation in wearable devices.
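The overall workflow (per-epoch feature extraction followed by a classifier such as the random forest) can be sketched as below; the four features, epoch data, and labels are placeholders rather than the authors' 14-feature set and annotated dataset.

    # Minimal sketch: simple per-epoch PPG features + random-forest quality classification.
    import numpy as np
    from scipy.stats import skew, kurtosis
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    def epoch_features(ppg, fs=100):
        """A few statistical/spectral descriptors for one 5 s epoch (placeholder feature set)."""
        spec = np.abs(np.fft.rfft(ppg - ppg.mean()))
        freqs = np.fft.rfftfreq(ppg.size, 1 / fs)
        band = (freqs > 0.5) & (freqs < 3.0)                   # plausible heart-rate band
        return [ppg.std(), skew(ppg), kurtosis(ppg), spec[band].sum() / spec.sum()]

    rng = np.random.default_rng(5)
    epochs = rng.standard_normal((600, 500))                   # 600 placeholder 5 s epochs at 100 Hz
    labels = np.repeat([0, 1, 2], 200)                         # 0 good, 1 moderate, 2 poor (placeholder)
    X = np.array([epoch_features(e) for e in epochs])

    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, stratify=labels, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))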
{"title":"Assessing photoplethysmography signal quality for wearable devices during unrestricted daily activities.","authors":"Liang Wei, Yushun Gong, Yunchi Li, Jianjie Wang, Yongqin Li","doi":"10.1088/2057-1976/ae250f","DOIUrl":"10.1088/2057-1976/ae250f","url":null,"abstract":"<p><p>Photoplethysmography (PPG) is widely used in wearable health monitors for tracking fundamental physiological parameters (e.g., heart rate and blood oxygen saturation) and advancing applications requiring high-quality signals-such as blood pressure assessment and cardiac arrhythmia detection. However, motion artifacts and environmental noise significantly degrade the accuracy of PPG-derived physiological measurements, potentially causing false alarms or delayed diagnoses in longitudinal monitoring cohorts. While signal quality assessment (SQA) provides an effective solution, existing methods show insufficient robustness in ambulatory scenarios. This study concentrates on PPG signal quality detection and proposes a robust SQA algorithm for wearable devices under unrestricted daily activities. PPG and acceleration signals were acquired from 54 participants using a self-made physiological monitoring headband during daily activities, segmented into 35712 non-overlapping 5-second epochs. Each epoch was annotated with: (1) PPG signal quality levels (good: 10817; moderate: 14788; poor: 10107), and (2) activity states classified as sedentary, light, moderate, or vigorous-intensity. The dataset was stratified into training (80%) and testing (20%) subsets to maintain proportional representation. Fourteen discriminative features were extracted from four domains: morphological characteristics, time-frequency distributions, physiological parameters estimation consistency and accuracy, and statistical properties of signal dynamics. Four machine learning algorithms were employed to train models for SQA. The random forest (95.6%) achieved the highest accuracy on the test set, but no significant differences (<i>p</i> = 0.471) compared to support vector machine (95.4%), naive Bayes (94.1%), and BP neural network (95.1%). Additionally, the classification accuracy showed no statistically significant variations (<i>p</i> = 0.648) across light (95.3%), moderate (96.0%), and vigorous activity (100%) when compared to sedentary (95.8%). All features exhibited significant differences (p < 0.05) across high/moderate/poor quality segments in all pairwise comparisons.The results indicate that the proposed feature set achieves robust SQA, maintaining consistently high classification accuracy across all activity intensities. This performance stability enables real-time implementation in wearable devices.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145628773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MLGF-GAN: a multi-level local-global feature fusion GAN for OCT image super-resolution
Pub Date: 2025-12-10 | DOI: 10.1088/2057-1976/ae2623
Tingting Han, Wenxuan Li, Jixing Han, Jihao Lang, Wenxia Zhang, Wei Xia, Kuiyuan Tao, Wei Wang, Jing Gao, Dandan Qi
Optical coherence tomography (OCT), a non-invasive imaging modality, holds significant clinical value in cardiology and ophthalmology. However, its imaging quality is often constrained by inherently limited resolution, which affects diagnostic utility. For OCT-based diagnosis, enhancing perceptual quality, which emphasizes human visual recognition and diagnostic effectiveness, is crucial. Existing super-resolution methods prioritize reconstruction accuracy (e.g., PSNR optimization) but neglect perceptual quality. To address this, we propose a Multi-level Local-Global feature Fusion Generative Adversarial Network (MLGF-GAN) that systematically integrates local details, global contextual information, and multi-level features to fully exploit the recoverable information in the image. The Local Feature Extractor (LFE) employs a Coordinate Attention-enhanced convolutional neural network (CNN) for lesion-focused local feature refinement, and the Global Feature Extractor (GFE) employs shifted-window Transformers to model long-range dependencies. The Multi-level Feature Fusion Structure (MFFS) hierarchically aggregates image features and adaptively processes information at different scales. Multi-scale (×2, ×4, ×8) evaluations on coronary and retinal OCT datasets demonstrate that the proposed model achieves highly competitive perceptual quality across all scales while maintaining reconstruction accuracy. The generated OCT super-resolution images exhibit superior texture detail restoration and spectral consistency, contributing to improved accuracy and reliability in clinical assessment. Furthermore, cross-pathology experiments demonstrate that the proposed model possesses excellent generalization capability.
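As a reference for the reconstruction-accuracy metric contrasted with perceptual quality above, the sketch below computes PSNR between a reference image and a reconstruction; the images are toy arrays in [0, 1].

    # Minimal sketch of PSNR, the pixel-fidelity metric that perceptual-quality methods trade off.
    import numpy as np

    def psnr(reference, reconstructed, max_val=1.0):
        mse = np.mean((reference - reconstructed) ** 2)
        return float("inf") if mse == 0 else 10 * np.log10(max_val ** 2 / mse)

    rng = np.random.default_rng(6)
    ref = rng.random((64, 64))
    recon = np.clip(ref + rng.normal(0, 0.05, ref.shape), 0, 1)   # simulated reconstruction error
    print(f"PSNR = {psnr(ref, recon):.1f} dB")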
{"title":"MLGF-GAN: a multi-level local-global feature fusion GAN for OCT image super-resolution.","authors":"Tingting Han, Wenxuan Li, Jixing Han, Jihao Lang, Wenxia Zhang, Wei Xia, Kuiyuan Tao, Wei Wang, Jing Gao, Dandan Qi","doi":"10.1088/2057-1976/ae2623","DOIUrl":"10.1088/2057-1976/ae2623","url":null,"abstract":"<p><p>Optical coherence tomography (OCT), a non-invasive imaging modality, holds significant clinical value in cardiology and ophthalmology. However, its imaging quality is often constrained by inherently limited resolution, thereby affecting diagnostic utility. For OCT-based diagnosis, enhancing perceptual quality that emphasizes human visual recognition ability and diagnostic effectiveness is crucial. Existing super-resolution methods prioritize reconstruction accuracy (e.g., PSNR optimization) but neglect perceptual quality. To address this, we propose a Multi-level Local-Global feature Fusion Generative Adversarial Network (MLGF-GAN) that systematically integrates local details, global contextual information, and multilevel features to fully exploit the recoverable information in the image. The Local Feature Extractor (LFE) employs Coordinate Attention-enhanced convolutional neural network (CNN) for lesion-focused local feature refinement, and the Global Feature Extractor (GFE) employs shifted-window Transformers to model long-range dependencies. The Multi-level Feature Fusion Structure (MFFS) hierarchically aggregates image features and adaptively processes information at different scales. The multi-scale (×2, ×4, ×8) evaluations conducted on coronary and retinal OCT datasets demonstrate that the proposed model achieves highly competitive perceptual quality across all scales while maintaining reconstruction accuracy. The generated OCT super-resolution images exhibit superior texture detail restoration and spectral consistency, contributing to improved accuracy and reliability in clinical assessment. Furthermore, cross-pathology experiments further demonstrate that the proposed model possesses excellent generalization capability.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145652973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}