
Latest Articles in Biomedical Physics & Engineering Express

An optimized EEG-based intrinsic brain network for depression detection using differential graph centrality.
IF 1.6 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-12-17 DOI: 10.1088/2057-1976/ae2689
Nausheen Ansari, Yusuf Khan, Omar Farooq

Globally, millions of adults suffer from Major Depressive Disorder (MDD). Studies that apply network theory to functional brain dynamics often use the fMRI modality to identify perturbed connectivity in depressed individuals. However, the weak temporal resolution of fMRI limits its ability to capture the fast dynamics of functional connectivity (FC). Electroencephalography (EEG), which can track functional brain dynamics on a millisecond scale, may therefore serve as a diagnostic modality that exploits the dynamics of intrinsic brain networks at the sensor level. This research proposes a unique neural marker for depression detection by analyzing long-range functional neurodynamics between the default mode network (DMN) and the visual network (VN) via optimal EEG nodes. While DMN abnormalities in depression are well documented, interactions between the DMN and VN, which reflect visual imagery at rest, remain unclear. A novel differential graph centrality index is applied to reduce the high-dimensional feature space representing EEG temporal neurodynamics, producing an optimized brain network for MDD detection. The proposed method achieves exceptional classification performance, with an average accuracy, F1 score, and MCC of 99.76%, 0.998, and 0.9995 on the MODMA dataset and 99.99%, 0.999, and 0.9998 on the HUSM dataset, respectively. The findings suggest that a significant decrease in connection density within the beta band (15-30 Hz) in depressed individuals reflects a disrupted long-range inter-network topology, which could serve as a reliable neural marker for depression detection and monitoring. Furthermore, weak FC links between the DMN and VN indicate disengagement between the two networks, which signifies progressive cognitive decline, weak memory, and disrupted thinking at rest, often accompanying MDD.
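The abstract does not give the exact form of the differential graph centrality index, but the underlying idea of comparing node centrality between thresholded EEG connectivity graphs can be sketched as follows. This is a minimal illustration using degree centrality; the function names, the 0.3 threshold, and the toy matrices are assumptions for the sketch, not the authors' implementation:

```python
import numpy as np

def degree_centrality(conn, thresh=0.3):
    """Binarize a functional-connectivity matrix and return node degrees."""
    adj = (np.abs(conn) > thresh).astype(int)
    np.fill_diagonal(adj, 0)  # ignore self-connections
    return adj.sum(axis=1)

def differential_centrality(conn_a, conn_b, thresh=0.3):
    """Per-node difference in degree centrality between two groups,
    e.g. healthy controls (conn_a) versus depressed subjects (conn_b)."""
    return degree_centrality(conn_a, thresh) - degree_centrality(conn_b, thresh)

rng = np.random.default_rng(0)
# Toy 8-node symmetric connectivity matrices with values in [0, 1]
a = rng.random((8, 8)); a = (a + a.T) / 2
b = a * 0.5  # hypothetical "depressed" group with globally weaker connections
diff = differential_centrality(a, b)  # non-negative here, since b's links are weaker
```

Nodes with large positive differences would be the candidates for an "optimized" reduced network in this toy setting.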

Evaluating corticokinematic coherence using electroencephalography and human pose estimation.
IF 1.6 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-12-16 DOI: 10.1088/2057-1976/ae27d5
E A Lorenz, X Su, N Skjæret-Maroni

Objective. While peripheral mechanisms of proprioception are well understood, the cortical processing of proprioceptive feedback during dynamic and complex movements remains less clear. Corticokinematic coherence (CKC), which quantifies the coupling between limb movements and sensorimotor cortex activity, offers a way to investigate this cortical processing. However, ecologically valid CKC assessment poses technical challenges. By integrating electroencephalography (EEG) with human pose estimation (HPE), this study therefore evaluates the feasibility and validity of a novel methodology for measuring CKC during upper-limb movements in real-world and virtual reality (VR) settings. Approach. Nine healthy adults performed repetitive finger-tapping (1 Hz) and reaching (0.5 Hz) tasks in real and VR settings. Their execution was recorded, temporally synchronized, using 64-channel EEG, optical marker-based motion capture, and monocular deep-learning-based HPE via MediaPipe. Alongside the CKC, the kinematic agreement between the two systems was assessed. Main results. CKC was detected using both marker-based and HPE-based kinematics across tasks and environments, with significant coherence observed in most participants. HPE-derived CKC closely matched marker-based measurements for most joints, exhibiting strong reliability and equivalent coherence magnitudes between real and VR conditions. Significance. This study validates a noninvasive and portable EEG-HPE approach for assessing cortical proprioceptive processing in ecologically valid settings, enabling broader clinical and rehabilitation applications.
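As a rough illustration of what CKC quantifies, spectral coherence between a kinematic trace and a cortical signal can be computed with `scipy.signal.coherence`. The sampling rate, noise levels, and signal model below are invented for this sketch and are not taken from the study:

```python
import numpy as np
from scipy.signal import coherence

fs = 500  # Hz, assumed EEG/kinematics sampling rate
t = np.arange(0, 60, 1 / fs)  # 60 s recording
rng = np.random.default_rng(1)

# Toy 1 Hz finger-tapping kinematics and a cortical signal that partly follows it
movement = np.sin(2 * np.pi * 1.0 * t) + 0.5 * rng.standard_normal(t.size)
eeg = 0.4 * np.sin(2 * np.pi * 1.0 * t) + rng.standard_normal(t.size)

# Welch-based magnitude-squared coherence between the two channels
f, cxy = coherence(eeg, movement, fs=fs, nperseg=2048)
peak = f[np.argmax(cxy)]  # coherence should peak near the 1 Hz tapping rate
```

In the study's setting, `movement` would come from marker-based or MediaPipe-derived joint trajectories rather than a synthetic sinusoid.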

2D Boundary Shape Detection Based on Camera for Enhanced Electrode Placement in Lung Electrical Impedance Tomography.
IF 1.6 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-12-15 DOI: 10.1088/2057-1976/ae2c8e
Leonard Brainaparte Kwee, Marlin Ramadhan Baidillah, Muhammad Nurul Puji, Winda Astuti

Accurate electrode placement is critical for improving image fidelity in lung Electrical Impedance Tomography (EIT), yet current systems rely on simplified circular templates that neglect patient-specific anatomical variation. This paper presents a novel, low-cost pipeline that uses smartphone-based photogrammetry to generate individualized 3D torso reconstructions for boundary-aligned electrode placement. The method includes automated video frame extraction, mesh post-processing, interactive 2D boundary extraction, real-world anatomical scaling, and both manual and automatic electrode detection. We evaluate two photogrammetry pipelines, commercial (RealityCapture) and open-source (Meshroom + MeshLab), on five subjects: a mannequin and four human participants. Results demonstrate sub-centimeter mean absolute error (MAE 0.42-0.60 cm) and mean percentage error (MPE 8.56-11.51%) in electrode placement accuracy. Repeatability analysis shows good consistency, with a coefficient of variation below 15% for MPE and below 19% for MAE. The generated subject-specific finite element meshes achieve 98.79% accuracy in cross-sectional area compared to direct measurements. While the current implementation requires 15-30 minutes of processing time and multiple software tools, it establishes a foundation for more precise and personalized bioimpedance imaging that could benefit both clinical EIT and broader applications in neurological and industrial domains.
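The MAE and MPE placement metrics can in principle be reproduced from per-electrode distance errors. The paper does not state its normalization for MPE, so the centroid-distance scaling below, like the circular toy geometry, is an assumption for illustration only:

```python
import numpy as np

def placement_errors(detected, reference):
    """Per-electrode Euclidean errors summarized as MAE (same units as the
    coordinates) and MPE (percent, under an assumed body-size normalization)."""
    detected = np.asarray(detected, float)
    reference = np.asarray(reference, float)
    dist = np.linalg.norm(detected - reference, axis=1)  # per-electrode error
    mae = dist.mean()
    # Hypothetical normalization: each reference electrode's distance
    # from the boundary centroid stands in for local torso size.
    scale = np.linalg.norm(reference - reference.mean(axis=0), axis=1)
    mpe = 100.0 * (dist / scale).mean()
    return mae, mpe

# 16 reference electrodes on a 15 cm circle; detections uniformly offset by 0.5 cm
theta = np.linspace(0, 2 * np.pi, 16, endpoint=False)
ref = 15.0 * np.column_stack([np.cos(theta), np.sin(theta)])
det = ref + np.array([0.5, 0.0])
mae, mpe = placement_errors(det, ref)  # mae = 0.5 cm, mpe ≈ 3.33%
```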

Cross-domain correlation analysis to improve SSVEP signals recognition in brain-computer interfaces.
IF 1.6 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-12-15 DOI: 10.1088/2057-1976/ae2772
Kaiwei Hu, Yong Wang, Kaixiang Tu, Hongxiang Guo, Jun Yan

The recognition of steady-state visual evoked potential (SSVEP) signals in brain-computer interface (BCI) systems is challenging due to the lack of training data and significant inter-subject variability. To address this, we propose a novel unsupervised transfer learning framework that enhances SSVEP recognition without requiring any subject-specific calibration. Our method employs a three-stage pipeline: (1) preprocessing with similarity-aware subject selection and Euclidean alignment to mitigate domain shifts; (2) hybrid feature extraction combining canonical correlation analysis (CCA) and task-related component analysis (TRCA) to enhance signal-to-noise ratio and phase sensitivity; and (3) weighted correlation fusion for robust classification. Extensive evaluations on the Benchmark and BETA datasets demonstrate that our approach achieves state-of-the-art performance, with average accuracies of 83.20% and 69.08% at a 1 s data length, respectively, significantly outperforming existing methods such as ttCCA and Ensemble-DNN. The highest information transfer rate reaches 157.53 bits min⁻¹, underscoring the framework's practical potential for plug-and-play SSVEP-based BCIs.
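The CCA stage of such pipelines correlates multichannel EEG with sine-cosine templates at each candidate stimulus frequency and picks the frequency with the largest canonical correlation. A minimal NumPy version is sketched below; the signal model, channel count, and frequency set are invented for illustration, and the paper's alignment, TRCA, and fusion stages are omitted:

```python
import numpy as np

def max_canonical_corr(X, Y):
    """Largest canonical correlation between two multichannel signals
    (rows = channels, columns = samples), via QR of the centered data
    followed by an SVD."""
    Qx, _ = np.linalg.qr(X.T - X.T.mean(axis=0))
    Qy, _ = np.linalg.qr(Y.T - Y.T.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def ssvep_reference(freq, fs, n_samples, n_harm=2):
    """Sine/cosine reference set at the stimulus frequency and its harmonics."""
    t = np.arange(n_samples) / fs
    return np.vstack([f(2 * np.pi * h * freq * t)
                      for h in range(1, n_harm + 1)
                      for f in (np.sin, np.cos)])

fs, n = 250, 250  # 1 s of data at an assumed 250 Hz sampling rate
rng = np.random.default_rng(2)
t = np.arange(n) / fs
# Toy 2-channel EEG responding to a 10 Hz flicker, plus noise
eeg = np.vstack([np.sin(2 * np.pi * 10 * t), np.cos(2 * np.pi * 10 * t)])
eeg = eeg + 0.5 * rng.standard_normal(eeg.shape)
scores = {f: max_canonical_corr(eeg, ssvep_reference(f, fs, n)) for f in (8, 10, 12)}
detected = max(scores, key=scores.get)  # → 10
```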

Multisequence MRI-driven assessment of PD-L1 expression in non-small cell lung cancer: a pilot study.
IF 1.6 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-12-11 DOI: 10.1088/2057-1976/ae2621
Agnese Robustelli Test, Chandra Bortolotto, Sithin Thulasi Seetha, Alessandra Marrocco, Carlotta Pairazzi, Gaia Messana, Leonardo Brizzi, Domenico Zacà, Robert Grimm, Francesca Brero, Manuel Mariani, Raffaella Fiamma Cabini, Giulia Maria Stella, Lorenzo Preda, Alessandro Lascialfari

Objective. Lung cancer remains the leading cause of cancer-related mortality worldwide, with Non-Small Cell Lung Cancer (NSCLC) accounting for approximately 85% of all cases. Programmed cell Death Ligand-1 (PD-L1) is a well-established biomarker that guides immunotherapy in advanced-stage NSCLC, currently evaluated via invasive biopsy procedures. This study aims to develop and validate a non-invasive pipeline for stratifying PD-L1 expression using quantitative analysis of IVIM parameter maps (diffusion D, pseudo-diffusion D*, and perfusion fraction pf) and T1-VIBE MRI acquisitions. Approach. MRI data from 43 NSCLC patients were analysed and labelled as PD-L1 positive (≥1%) or negative (<1%) based on immunohistochemistry. After pre-processing, 1,171 radiomic features and 512 deep learning features were obtained. Three feature sets (radiomic, deep learning, and fusion) were tested with Logistic Regression, Random Forest, and XGBoost. Four discriminative features were selected using the Mann-Whitney U-test, and model performance was primarily assessed using the area under the receiver operating characteristic curve (AUC). Robustness was ensured through repeated stratified 5-fold cross-validation, bootstrap-derived confidence intervals, and permutation tests. Main results. Logistic Regression generally demonstrated the highest classification performance, with AUC values ranging from 0.78 to 0.92 across all feature sets. Fusion models outperformed or matched the best standalone radiomic or deep learning model. Among the multisequence MRI features, the IVIM-D fusion features yielded the best performance with an AUC of 0.92, followed by the IVIM-D* radiomic features with a similar AUC of 0.91. For IVIM-pf and T1-VIBE derived features, the fusion model yielded the best AUC values of 0.87 and 0.90, respectively. Significance. The results highlight the potential of a combined radiomic-deep learning approach to effectively detect PD-L1 expression from MRI acquisitions, paving the way for a non-invasive PD-L1 evaluation procedure.
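The feature-selection and evaluation steps above (Mann-Whitney ranking, then AUC) can be sketched generically. This runs on synthetic data with an invented sample size and effect size; it is not the study's pipeline, which additionally uses cross-validated classifiers:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def select_features(X_pos, X_neg, k=4):
    """Rank features by Mann-Whitney U p-value between the two classes
    and return the indices of the k most discriminative ones."""
    pvals = np.array([mannwhitneyu(X_pos[:, j], X_neg[:, j]).pvalue
                      for j in range(X_pos.shape[1])])
    return np.argsort(pvals)[:k]

def auc_from_scores(scores_pos, scores_neg):
    """AUC via its rank-statistic identity: AUC = U / (n_pos * n_neg)."""
    u = mannwhitneyu(scores_pos, scores_neg).statistic  # U of the first sample
    return u / (len(scores_pos) * len(scores_neg))

rng = np.random.default_rng(42)
X_pos = rng.standard_normal((25, 12))
X_pos[:, 3] += 1.5                      # feature 3 carries the class signal
X_neg = rng.standard_normal((25, 12))
top = select_features(X_pos, X_neg, k=4)        # should include feature 3
auc = auc_from_scores(X_pos[:, 3], X_neg[:, 3])  # well above chance (0.5)
```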

An in vitro investigation of 5-aminolevulinic acid and acridine orange as sensitizers in radiodynamic therapy for prostate and breast cancer.
IF 1.6 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-12-11 DOI: 10.1088/2057-1976/ae2688
Tristan K Gaddis, Dusica Cvetkovic, Dae-Myoung Yang, Lili Chen, C-M Charlie Ma

Purpose. Radiodynamic therapy (RDT) is an emerging technique that enhances the therapeutic effects of radiation by using photosensitizers to amplify tumor cell damage while minimizing harm to normal tissues. This in vitro investigation compares the biocompatibility and sensitizing efficacy of two candidate photosensitizers, 5-aminolevulinic acid (5-ALA) and acridine orange (AO), in human breast adenocarcinoma (MCF7) and prostate adenocarcinoma (PC3) cell lines. Materials and methods. MCF7 and PC3 cell lines were cultured and exposed to a range of 5-ALA and AO concentrations to assess biocompatibility using PrestoBlue viability assays. Based on these results, optimal concentrations were selected for irradiation experiments. Cells were then seeded in T-25 flasks and incubated with 5-ALA or AO prior to receiving 2 Gy or 4 Gy of megavoltage photon radiation (18 MV or 45 MV). Clonogenic assays were performed to determine the surviving fractions of the cells. Results. 5-ALA exhibited a broader biocompatibility profile than AO, remaining non-cytotoxic up to 100 μg ml⁻¹. In contrast, AO showed cytotoxic effects above 1 μg ml⁻¹. At 18 MV, limited radiosensitization was observed, except at higher 5-ALA concentrations. However, at 45 MV, both sensitizers significantly reduced cell survival, particularly at 4 Gy. The most pronounced effect was observed with 100 μg ml⁻¹ 5-ALA, which consistently resulted in lower surviving fractions than AO across both cell lines. Each sensitizer demonstrated differing effectiveness depending on the cell line and photon energy used. Conclusions. Both 5-ALA and AO enhanced the cytotoxic effects of radiation, but 5-ALA demonstrated superior biocompatibility and more consistent radiosensitization across both cell lines. Notably, the effectiveness of both sensitizers increased with higher photon energy, reinforcing the importance of beam energy in RDT design. These results underscore the advantages of 5-ALA over AO and highlight the need to optimize both sensitizer selection and radiation energy in clinical applications.
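For reference, the clonogenic-assay endpoint quoted above is the surviving fraction, conventionally computed as colonies counted per cell seeded, normalized by the plating efficiency of unirradiated controls. The colony and seeding counts below are hypothetical, not the study's data:

```python
def surviving_fraction(colonies, seeded, plating_efficiency):
    """Clonogenic-assay surviving fraction:
    SF = (colonies counted / cells seeded) / plating efficiency of controls."""
    return (colonies / seeded) / plating_efficiency

# Hypothetical numbers: unirradiated controls form colonies from 60% of cells;
# after 4 Gy with sensitizer, 90 colonies grow from 1000 seeded cells.
pe = 300 / 500                          # control plating efficiency = 0.6
sf = surviving_fraction(90, 1000, pe)   # → 0.15
```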

OCTSeg-UNeXt: an ultralight hybrid Conv-MLP network for retinal pathology segmentation in point-of-care OCT imaging.
IF 1.6 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-12-11 DOI: 10.1088/2057-1976/ae2127
Shujun Men, Jiamin Wang, Yanke Li, Yuntian Bai, Lei Zhang, Li Huo

To enable efficient and accurate retinal lesion segmentation on resource-constrained point-of-care Optical Coherence Tomography (OCT) systems, we propose OCTSeg-UNeXt, an ultralight hybrid Convolution-Multilayer Perceptron (Conv-MLP) network optimized for OCT image analysis. Built upon the UNeXt architecture, our model integrates a Depthwise-Augmented Scale Context (DASC) module for adaptive multi-scale feature aggregation and a Group Fusion Bridge (GFB) to enhance information interaction between the encoder and decoder. Additionally, we employ a deep supervision strategy during training to improve structural learning and accelerate convergence. We evaluated our model on three publicly available OCT datasets. Comparative and ablation experiments show that our method achieves strong performance on multiple key metrics. Importantly, it does so with only 0.187 M parameters and 0.053 G floating-point operations (FLOPs), significantly fewer than UNeXt (0.246 M, 0.086 G) and UNet (17 M, 30.8 G). These findings demonstrate the proposed method's strong potential for deployment in Point-of-Care Imaging (POCI) systems, where computational efficiency and model compactness are crucial.
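The parameter economy of depthwise designs such as the DASC module follows from a simple count: a depthwise convolution applies one k×k filter per input channel, with a 1×1 pointwise convolution mixing channels. The layer sizes below are arbitrary examples, not the actual OCTSeg-UNeXt configuration:

```python
def conv2d_params(c_in, c_out, k):
    """Parameters of a standard 2-D convolution (weights + biases)."""
    return c_in * c_out * k * k + c_out

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k×k conv (one filter per channel) followed by a
    1×1 pointwise conv, each with biases."""
    depthwise = c_in * k * k + c_in
    pointwise = c_in * c_out + c_out
    return depthwise + pointwise

# Example 32→64 channel layer with 3×3 kernels
std = conv2d_params(32, 64, 3)               # → 18496
sep = depthwise_separable_params(32, 64, 3)  # → 2432, ~7.6× fewer parameters
```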

A multi-task cross-attention strategy to segment and classify polyps.
IF 1.6 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-12-11 DOI: 10.1088/2057-1976/ae2b78
Franklin Sierra, Lina Ruiz, Fabio Martínez Carrillo

Polyps are the main biomarkers for diagnosing colorectal cancer. Their early detection and accurate characterization during colonoscopy procedures rely on expert observation. Nevertheless, such a task is prone to errors, particularly in morphological characterization. This work proposes a multi-task representation capable of segmenting polyps and stratifying their malignancy from individual colonoscopy frames. The approach employs a deep representation based on multi-head cross-attention, refined with morphological characterization learned from independent maps according to the degree of polyp malignancy. The proposed method was validated on the BKAI-IGH dataset, comprising 1200 samples (1000 white-light imaging and 200 NICE samples) with fine-grained segmentation masks. The results show an average IoU of 83.5% and a recall of 94%. Additionally, external dataset validation demonstrated the model's generalization capability. Inspired by conventional expert characterization, the proposed method integrates textural and morphological observations, enabling both tasks: polyp segmentation and the corresponding malignancy stratification. The proposed strategy achieves state-of-the-art performance on public datasets, showing promising results and demonstrating its ability to generate a polyp representation suitable for multiple tasks.
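The core mechanism named in the abstract, multi-head cross-attention, lets queries from one feature stream (e.g., textural) attend to keys/values from another (e.g., morphological). A minimal NumPy sketch of the generic operation follows; the random projection weights and function name are purely illustrative, not the authors' implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multihead_cross_attention(q_tokens, kv_tokens, n_heads, rng):
    """Cross-attention: q_tokens (Nq, d) attend to kv_tokens (Nk, d); returns (Nq, d).

    Projection weights are drawn at random here for illustration only.
    """
    d = q_tokens.shape[1]
    assert d % n_heads == 0
    dh = d // n_heads
    Wq = rng.standard_normal((d, d)) / np.sqrt(d)
    Wk = rng.standard_normal((d, d)) / np.sqrt(d)
    Wv = rng.standard_normal((d, d)) / np.sqrt(d)
    # Project, then split the channel dimension into heads
    Q = (q_tokens @ Wq).reshape(-1, n_heads, dh)
    K = (kv_tokens @ Wk).reshape(-1, n_heads, dh)
    V = (kv_tokens @ Wv).reshape(-1, n_heads, dh)
    out = np.empty_like(Q)
    for h in range(n_heads):
        scores = Q[:, h] @ K[:, h].T / np.sqrt(dh)   # (Nq, Nk) similarity
        out[:, h] = softmax(scores, axis=-1) @ V[:, h]
    return out.reshape(-1, d)
```

In a multi-task setting, the same attended representation can then feed both a segmentation head and a malignancy-classification head.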

Assessing photoplethysmography signal quality for wearable devices during unrestricted daily activities.
IF 1.6 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-12-10 DOI: 10.1088/2057-1976/ae250f
Liang Wei, Yushun Gong, Yunchi Li, Jianjie Wang, Yongqin Li

Photoplethysmography (PPG) is widely used in wearable health monitors for tracking fundamental physiological parameters (e.g., heart rate and blood oxygen saturation) and advancing applications requiring high-quality signals, such as blood pressure assessment and cardiac arrhythmia detection. However, motion artifacts and environmental noise significantly degrade the accuracy of PPG-derived physiological measurements, potentially causing false alarms or delayed diagnoses in longitudinal monitoring cohorts. While signal quality assessment (SQA) provides an effective solution, existing methods show insufficient robustness in ambulatory scenarios. This study concentrates on PPG signal quality detection and proposes a robust SQA algorithm for wearable devices under unrestricted daily activities. PPG and acceleration signals were acquired from 54 participants using a self-made physiological monitoring headband during daily activities, segmented into 35712 non-overlapping 5-second epochs. Each epoch was annotated with: (1) PPG signal quality levels (good: 10817; moderate: 14788; poor: 10107), and (2) activity states classified as sedentary, light, moderate, or vigorous-intensity. The dataset was stratified into training (80%) and testing (20%) subsets to maintain proportional representation. Fourteen discriminative features were extracted from four domains: morphological characteristics, time-frequency distributions, physiological parameter estimation consistency and accuracy, and statistical properties of signal dynamics. Four machine learning algorithms were employed to train models for SQA. The random forest (95.6%) achieved the highest accuracy on the test set, though not significantly higher (p = 0.471) than support vector machine (95.4%), naive Bayes (94.1%), and BP neural network (95.1%). Additionally, the classification accuracy showed no statistically significant variation (p = 0.648) across light (95.3%), moderate (96.0%), and vigorous activity (100%) when compared to sedentary (95.8%). All features exhibited significant differences (p < 0.05) across good/moderate/poor quality segments in all pairwise comparisons. The results indicate that the proposed feature set achieves robust SQA, maintaining consistently high classification accuracy across all activity intensities. This performance stability enables real-time implementation in wearable devices.
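One of the four feature domains named in the abstract is consistency and accuracy of physiological parameter estimation. A toy sketch of one such feature, a spectral heart-rate estimate computed from a 5-second epoch, is shown below; the pulse-band limits and function name are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def dominant_hr_bpm(epoch, fs):
    """Estimate heart rate (bpm) from a PPG epoch via its dominant spectral peak.

    Searches 0.7-3.5 Hz (42-210 bpm), a plausible resting-to-exercise pulse band.
    """
    x = epoch - epoch.mean()                     # remove DC before the FFT
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= 0.7) & (freqs <= 3.5)
    peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak

# A clean synthetic 1.2 Hz pulse over a 5-second epoch sampled at 100 Hz
fs = 100
t = np.arange(5 * fs) / fs
hr = dominant_hr_bpm(np.sin(2 * np.pi * 1.2 * t), fs)
```

Disagreement between such an estimate across neighboring epochs (or against an accelerometer-derived cadence) is one plausible way to flag motion-corrupted segments.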

MLGF-GAN: a multi-level local-global feature fusion GAN for OCT image super-resolution.
IF 1.6 Q3 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING Pub Date : 2025-12-10 DOI: 10.1088/2057-1976/ae2623
Tingting Han, Wenxuan Li, Jixing Han, Jihao Lang, Wenxia Zhang, Wei Xia, Kuiyuan Tao, Wei Wang, Jing Gao, Dandan Qi

Optical coherence tomography (OCT), a non-invasive imaging modality, holds significant clinical value in cardiology and ophthalmology. However, its imaging quality is often constrained by inherently limited resolution, thereby affecting diagnostic utility. For OCT-based diagnosis, enhancing perceptual quality that emphasizes human visual recognition ability and diagnostic effectiveness is crucial. Existing super-resolution methods prioritize reconstruction accuracy (e.g., PSNR optimization) but neglect perceptual quality. To address this, we propose a Multi-level Local-Global feature Fusion Generative Adversarial Network (MLGF-GAN) that systematically integrates local details, global contextual information, and multilevel features to fully exploit the recoverable information in the image. The Local Feature Extractor (LFE) employs Coordinate Attention-enhanced convolutional neural network (CNN) for lesion-focused local feature refinement, and the Global Feature Extractor (GFE) employs shifted-window Transformers to model long-range dependencies. The Multi-level Feature Fusion Structure (MFFS) hierarchically aggregates image features and adaptively processes information at different scales. The multi-scale (×2, ×4, ×8) evaluations conducted on coronary and retinal OCT datasets demonstrate that the proposed model achieves highly competitive perceptual quality across all scales while maintaining reconstruction accuracy. The generated OCT super-resolution images exhibit superior texture detail restoration and spectral consistency, contributing to improved accuracy and reliability in clinical assessment. Furthermore, cross-pathology experiments further demonstrate that the proposed model possesses excellent generalization capability.
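The abstract contrasts reconstruction accuracy, typically measured by PSNR, with perceptual quality. For reference, PSNR itself is straightforward to compute; this minimal NumPy sketch is the generic definition, not the paper's evaluation code:

```python
import math
import numpy as np

def psnr(reference, test, data_range=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return math.inf  # identical images
    return 10.0 * math.log10(data_range ** 2 / mse)
```

Because PSNR is a pixel-wise error average, two reconstructions with equal PSNR can differ greatly in texture realism, which is why perception-oriented GAN losses are used alongside it.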
