Pub Date: 2025-12-18 | DOI: 10.1088/2057-1976/ae291b
Hajar Mohamadzade Sani, Seyed Mostafa Hosseinalipour, Sarah Salehi, Koorosh Aieneh
Alginate microgels are attractive platforms for cell encapsulation, yet conventional gelation strategies often lead to heterogeneous crosslinking, unstable droplets, and reduced cell viability. Here, we present a paraffin oil-based flow-focusing microfluidic system that integrates in situ and ex situ gelation to generate structurally homogeneous and monodisperse Ca-ALG microgels. Unlike conventional approaches that often suffer from unstable droplet formation or incomplete gelation, our method reliably produced uniform microgels with coefficients of variation consistently below 5% and maintained spherical morphology across a wide range of flow conditions. Scanning electron microscopy revealed a hierarchical porous architecture that supported nutrient and metabolite transport while providing structural stability. Encapsulated HEK-293 cells remained highly viable for more than two weeks, and spontaneous spheroid formation occurred within 24 h, an outcome rarely achieved in comparable systems that underscores the functional relevance of this platform. Compared with existing microfluidic methods, this paraffin oil-driven dual gelation strategy offered superior reproducibility, droplet stability, and encapsulation efficiency. This study integrates and optimizes previously reported dual gelation strategies by employing paraffin oil in a flow-focusing device, establishing a simple, practical, and scalable solution to long-standing challenges in microgel-based encapsulation, with strong potential to advance 3D culture, tissue engineering, and regenerative medicine.
Title: A simple yet effective microfluidic device for the in-situ formation of uniform-sized cell-laden microgels.
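The monodispersity criterion quoted above (coefficient of variation consistently below 5%) is straightforward to check from measured droplet diameters. The sketch below uses hypothetical diameter values, not data from the paper.

```python
import numpy as np

def coefficient_of_variation(diameters):
    """CV = sample standard deviation / mean; microgel populations are
    commonly called monodisperse when CV < 5%."""
    d = np.asarray(diameters, dtype=float)
    return float(np.std(d, ddof=1) / np.mean(d))

# Hypothetical measured droplet diameters (micrometres)
diams = [98.0, 101.5, 100.2, 99.1, 100.8, 99.6]
cv = coefficient_of_variation(diams)
print(f"CV = {cv:.2%}")  # below 5% -> monodisperse by the above criterion
```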
Pub Date: 2025-12-18 | DOI: 10.1088/2057-1976/ae183a
Siti Mahfuzah Fauzi, Latifah Munirah Kamarudin, Tiu Ting Yii
Impulse-radio ultra-wideband (IR-UWB) radar technology employs short-duration impulse waves with broad bandwidth for precise detection and tracking, offering a cost-effective, non-invasive alternative for portable heart rate monitoring. Its practical design supports long-term healthcare applications without adverse effects. However, effective implementation necessitates robust signal processing techniques to minimize interference from clutter signals and breathing harmonics, enabling the extraction of the target signal from background noise and interference. This study aims to provide real-time measurements through the implementation of signal processing algorithms such as Fast Fourier Transform (FFT), autocorrelation, and peak finding with a moving average filter (MAF) to extract heartbeat signals from background noise and interference. Algorithms were tuned for range parameters and bandpass filter order, with a Kaiser window-based FIR filter (order 250) selected for testing. The FFT algorithm achieved the highest accuracy of 85.6%, while peak finding with MAF and autocorrelation attained accuracies of 78.5% and 76.6%, respectively. The FFT algorithm demonstrated superior potential for real-time heart rate monitoring and was implemented in a graphical user interface (GUI) for data visualization.
Title: Real-time wireless signal processing for contactless heart rate monitoring with impulse-radio ultra-wideband radar technology.
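The FFT-based extraction described above amounts to a spectral peak search restricted to the cardiac band. The sketch below illustrates that step; the sampling rate, band limits, and synthetic chest-motion signal are assumptions for illustration, not the authors' parameters.

```python
import numpy as np

def estimate_heart_rate_fft(signal, fs, lo=0.8, hi=3.0):
    """Pick the dominant FFT magnitude peak inside the cardiac band
    (lo-hi Hz) and convert it to beats per minute."""
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak_freq

# Synthetic chest motion: 1.2 Hz heartbeat (72 bpm) riding on 0.25 Hz
# breathing plus noise; the breathing component lies outside the band.
fs = 50.0
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(0)
x = 0.2 * np.sin(2 * np.pi * 1.2 * t) + np.sin(2 * np.pi * 0.25 * t) \
    + 0.05 * rng.normal(size=t.size)
print(f"{estimate_heart_rate_fft(x, fs):.1f} bpm")  # ~72 bpm
```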
Pub Date: 2025-12-18 | DOI: 10.1088/2057-1976/ae291c
Hui Xiong, Shuaiqi Chang, Jinzhen Liu
Objective. To enhance the decoding accuracy and information transfer rate of steady-state visual evoked potential-based brain-computer interface (SSVEP-BCI) systems and to reduce inter-subject variability for broader SSVEP-BCI applications, a dual-channel TRCA-net (DC-TRCA-net) method is proposed, based on cross-subject positive transfer. The proposed method incorporates an innovative Transfer-Accuracy-based Subject Selection (T-ASS) strategy and a deep learning network integrated with the SSVEP Domain Adaptation Network (SSVEP-DAN) to enhance SSVEP-BCI decoding performance. The T-ASS strategy constructs contribution scores by computing each subject's self-accuracy and transfer accuracy, and enables effective source subject selection while mitigating negative transfer risks. DC-TRCA-net is further developed to improve model generalization through cross-subject data augmentation. The effectiveness of the proposed method is validated on two large-scale public benchmark datasets. Experimental results demonstrate that DC-TRCA-net outperforms existing networks across both datasets, with particularly substantial performance gains observed in complex experimental scenarios.
Title: Dual-channel TRCA-net based on cross-subject positive transfer for SSVEP-BCI.
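The T-ASS idea of ranking source subjects by self-accuracy and transfer accuracy before transfer can be illustrated with a minimal selection sketch. The linear weighting and the `select_source_subjects` helper are assumptions for illustration, not the paper's exact contribution-score formula.

```python
def select_source_subjects(self_acc, transfer_acc, k=3, alpha=0.5):
    """Rank source subjects by a combined contribution score
    alpha * self-accuracy + (1 - alpha) * transfer accuracy,
    keeping only the top-k to reduce the risk of negative transfer.
    (The weighting scheme here is illustrative.)"""
    scores = {s: alpha * self_acc[s] + (1 - alpha) * transfer_acc[s]
              for s in self_acc}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Hypothetical per-subject accuracies
self_acc = {"S1": 0.92, "S2": 0.60, "S3": 0.85, "S4": 0.78}
transfer_acc = {"S1": 0.70, "S2": 0.40, "S3": 0.80, "S4": 0.55}
print(select_source_subjects(self_acc, transfer_acc, k=2))  # ['S3', 'S1']
```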
Pub Date: 2025-12-17 | DOI: 10.1088/2057-1976/ae2e01
Raida Hentati, Manel Hentati, Aymen Abid
The increasing prevalence of cardiovascular diseases (CVDs) calls for innovative diagnostic solutions that are both accurate and scalable. Electrocardiograms (ECGs) remain central to cardiac assessment; however, manual interpretation is time-consuming and error-prone. To address this challenge, we propose a lightweight multimodal generative AI framework capable of automatically interpreting ECG images and producing structured clinical reports. The framework builds upon the SmolVLM-500M-Instruct model, fine-tuned via Quantized Low-Rank Adaptation (QLoRA) to enable efficient deployment on standard hardware. A custom multimodal ECG dataset, comprising image-report pairs curated from authoritative clinical sources and augmented to mitigate class imbalance, served as the foundation for training. The proposed architecture integrates a vision encoder, a cross-modal fusion mechanism, and a language decoder to effectively align visual ECG representations with diagnostic narratives. Experimental evaluations demonstrate significant improvements in BLEU, ROUGE-L, and BERTScore metrics through a two-phase fine-tuning strategy, highlighting the model's ability to generate clinically coherent and semantically rich reports. Overall, this work contributes a scalable, interpretable, and resource-efficient AI framework for cardiac diagnostics, bridging the gap between state-of-the-art deep learning research and real-world clinical practice.
Title: Two Stage Fine-Tuned Multimodal Generative AI for Automated ECG Based Cardiovascular Report Generation.
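The BLEU score used in the evaluation above is built from modified n-gram precisions. A minimal sketch of the unigram case is shown below; the example report strings are hypothetical, not drawn from the study's dataset.

```python
from collections import Counter

def unigram_precision(candidate, reference):
    """Modified unigram precision, the basic building block of BLEU:
    candidate-token counts clipped by reference counts, over candidate length."""
    cand, ref = candidate.split(), reference.split()
    ref_counts = Counter(ref)
    clipped = sum(min(c, ref_counts[w]) for w, c in Counter(cand).items())
    return clipped / len(cand)

# Hypothetical reference and generated ECG report fragments
ref = "sinus rhythm with normal axis and no acute st changes"
hyp = "sinus rhythm with no acute st changes"
print(round(unigram_precision(hyp, ref), 2))  # 1.0 - every token is in the reference
```

Full BLEU additionally combines higher-order n-gram precisions with a brevity penalty; libraries such as sacrebleu implement the complete metric.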
Pub Date: 2025-12-17 | DOI: 10.1088/2057-1976/ae2622
Maria Jose Medrano, Xinyuan Chen, Lucas Norberto Burigo, Joseph A O'Sullivan, Jeffrey F Williamson
Objective. We propose a novel method, basis vector model material indexing (BVM-MI), for predicting atomic composition and mass density from two independent basis vector model weights derived from dual-energy CT (DECT) for Monte Carlo (MC) dose planning. Approach. BVM-MI employs multiple linear regression on BVM weights and their quotient to predict elemental composition and mass density for 70 representative tissues. Predicted values were imported into the TOPAS MC code to simulate proton dose deposition in a uniform cylinder phantom composed of each tissue type. The performance of BVM-MI was compared to the conventional Hounsfield Unit material indexing method (HU-MI), which estimates elemental composition and density from CT numbers (HU). Evaluation metrics included absolute errors in predicted elemental compositions and relative percent errors in calculated mass density and mean excitation energy. Dose distributions were assessed by quantifying the absolute error in the depth of 80% maximum scored dose (R80) and relative percent errors in stopping power (SP) between MC simulations using HU-MI, BVM-MI, and benchmark compositions. Lateral dose profiles were analyzed at R80 and Bragg peak (RBP) depths for the three tissues showing the largest discrepancies in R80 depth. Main Results. BVM-MI outperformed HU-MI in elemental composition predictions, with mean root-mean-square errors (RMSE) of 1.30% (soft tissue) and 0.1% (bony tissue), compared to 4.20% and 1.9% for HU-MI. R80 depth RMSEs were 0.2 mm (soft) and 0.1 mm (bony) for BVM-MI, versus 1.8 mm and 0.7 mm for HU-MI. Lateral dose profile analysis showed overall smaller dose errors for BVM-MI across the core, halo, and proximal aura regions. Significance. Fully utilizing the two-parameter BVM space for material indexing improved TOPAS MC dose calculations by factors of 7 to 9 in RMSE compared with the conventional HU-MI method, demonstrating the potential of BVM-MI to enhance proton therapy planning, particularly for tissues with substantial elemental variability.
Title: Derivation of tissue properties from basis-vector model weights for dual-energy CT-based Monte Carlo proton beam dose calculations.
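The core BVM-MI step, multiple linear regression on the two BVM weights and their quotient, can be sketched with ordinary least squares on synthetic data. The design matrix, coefficient values, and tissue count below are illustrative, not fits to the paper's tissue library.

```python
import numpy as np

def bvm_design(w1, w2):
    """Design matrix regressing a tissue property on the two BVM
    weights and their quotient: [1, w1, w2, w1/w2]."""
    return np.column_stack([np.ones_like(w1), w1, w2, w1 / w2])

# Synthetic "tissues": 70 samples with known linear relation plus noise
rng = np.random.default_rng(0)
w1 = rng.uniform(0.2, 0.8, 70)
w2 = rng.uniform(0.2, 0.8, 70)
true_beta = np.array([0.1, 0.5, 0.3, 0.02])
density = bvm_design(w1, w2) @ true_beta + rng.normal(0, 1e-3, 70)

# Least-squares fit recovers the generating coefficients
beta, *_ = np.linalg.lstsq(bvm_design(w1, w2), density, rcond=None)
print(np.round(beta, 3))  # ~[0.1, 0.5, 0.3, 0.02]
```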
Pub Date: 2025-12-17 | DOI: 10.1088/2057-1976/ae2689
Nausheen Ansari, Yusuf Khan, Omar Farooq
Millions of adults worldwide suffer from Major Depressive Disorder (MDD). Studies that apply network theory to functional brain dynamics often use fMRI to identify perturbed connectivity in depressed individuals. However, the weak temporal resolution of fMRI limits its ability to capture the fast dynamics of functional connectivity (FC). Electroencephalography (EEG), which can track functional brain dynamics at millisecond resolution, may therefore serve as a basis for diagnostic markers that exploit the dynamics of intrinsic brain networks at the sensor level. This research proposes a unique neural marker for depression detection by analyzing long-range functional neurodynamics between the default mode network (DMN) and visual network (VN) via optimal EEG nodes. While DMN abnormalities in depression are well documented, the interactions between the DMN and VN, which reflect visual imagery at rest, remain unclear. A novel differential graph centrality index is applied to reduce the high-dimensional feature space representing EEG temporal neurodynamics, producing an optimized brain network for MDD detection. The proposed method achieves exceptional classification performance, with average accuracy, F1 score, and MCC of 99.76%, 0.998, and 0.9995 on the MODMA dataset and 99.99%, 0.999, and 0.9998 on the HUSM dataset, respectively. The findings suggest that a significant decrease in connection density within the beta band (15-30 Hz) in depressed individuals reflects disrupted long-range inter-network topology, which could serve as a reliable neural marker for depression detection and monitoring.
Furthermore, weak FC links between the DMN and VN indicate disengagement between the two networks, signifying progressive cognitive decline, weak memory, and disrupted thinking at rest, often accompanying MDD.
Title: An optimized EEG-based intrinsic brain network for depression detection using differential graph centrality.
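The beta-band connection-density measure discussed above reduces to counting suprathreshold edges in a functional-connectivity matrix. The toy matrix and threshold below are illustrative assumptions.

```python
import numpy as np

def connection_density(adj, threshold):
    """Fraction of possible edges whose weight exceeds the threshold
    in an undirected functional-connectivity matrix (zero diagonal)."""
    a = np.asarray(adj)
    iu = np.triu_indices(a.shape[0], k=1)  # unique node pairs only
    return float(np.mean(a[iu] > threshold))

# Toy 4-node beta-band connectivity matrix (symmetric)
adj = np.array([[0.0, 0.9, 0.2, 0.7],
                [0.9, 0.0, 0.1, 0.6],
                [0.2, 0.1, 0.0, 0.3],
                [0.7, 0.6, 0.3, 0.0]])
print(connection_density(adj, threshold=0.5))  # 0.5: 3 of 6 edges survive
```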
Pub Date: 2025-12-16 | DOI: 10.1088/2057-1976/ae27d5
E A Lorenz, X Su, N Skjæret-Maroni
Objective. While peripheral mechanisms of proprioception are well understood, the cortical processing of proprioceptive feedback during dynamic and complex movements remains less clear. Corticokinematic coherence (CKC), which quantifies the coupling between limb movements and sensorimotor cortex activity, offers a way to investigate this cortical processing. However, ecologically valid CKC assessment poses technical challenges. By integrating electroencephalography (EEG) with human pose estimation (HPE), this study therefore assesses the feasibility and validity of a novel methodology for measuring CKC during upper-limb movements in real-world and virtual reality (VR) settings. Approach. Nine healthy adults performed repetitive finger-tapping (1 Hz) and reaching (0.5 Hz) tasks in real and VR settings. Task execution was recorded with temporal synchronization using 64-channel EEG, optical marker-based motion capture, and monocular deep-learning-based HPE via Mediapipe. Alongside the CKC, the kinematic agreement between the two systems was assessed. Main results. CKC was detected using both marker-based and HPE-based kinematics across tasks and environments, with significant coherence observed in most participants. HPE-derived CKC closely matched marker-based measurements for most joints, exhibiting strong reliability and equivalent coherence magnitudes between real and VR conditions. Significance. This study validates a noninvasive and portable EEG-HPE approach for assessing cortical proprioceptive processing in ecologically valid settings, enabling broader clinical and rehabilitation applications.
Title: Evaluating corticokinematic coherence using electroencephalography and human pose estimation.
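CKC is conventionally quantified as magnitude-squared coherence between a limb kinematic signal and EEG at the movement frequency. The sketch below applies `scipy.signal.coherence` to synthetic 1 Hz tapping data; all signal parameters (sampling rate, noise level, coupling gain) are assumptions for illustration.

```python
import numpy as np
from scipy.signal import coherence

# Synthetic data: a 1 Hz finger-tapping kinematic trace and an EEG-like
# signal that contains an attenuated copy of it buried in noise.
fs = 100.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(1)
kin = np.sin(2 * np.pi * 1.0 * t)
eeg = 0.5 * kin + rng.normal(0, 1.0, t.size)

# Welch-averaged magnitude-squared coherence; read it off near 1 Hz
f, cxy = coherence(kin, eeg, fs=fs, nperseg=1024)
peak = float(cxy[np.argmin(np.abs(f - 1.0))])
print(round(peak, 2))  # coherence near the tapping frequency is high
```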
Pub Date: 2025-12-15 | DOI: 10.1088/2057-1976/ae2c8e
Leonard Brainaparte Kwee, Marlin Ramadhan Baidillah, Muhammad Nurul Puji, Winda Astuti
Accurate electrode placement is critical for improving image fidelity in lung Electrical Impedance Tomography (EIT), yet current systems rely on simplified circular templates that neglect patient-specific anatomical variation. This paper presents a novel, low-cost pipeline that uses smartphone-based photogrammetry to generate individualized 3D torso reconstructions for boundary-aligned electrode placement. The method includes automated video frame extraction, mesh post-processing, interactive 2D boundary extraction, real-world anatomical scaling, and both manual and automatic electrode detection. We evaluate two photogrammetry pipelines, commercial (RealityCapture) and open-source (Meshroom + MeshLab), across five subjects including a mannequin and four human participants. Results demonstrate sub-centimeter Mean Absolute Error (MAE 0.42-0.60 cm) and Mean Percentage Error (MPE 8.56-11.51%) in electrode placement accuracy. Repeatability analysis shows good consistency, with Coefficient of Variation below 15% for MPE and 19% for MAE. The generated subject-specific finite element meshes achieve 98.79% accuracy in cross-sectional area compared to direct measurements. While the current implementation requires 15-30 minutes of processing time and multiple software tools, it establishes a foundation for more precise and personalized bioimpedance imaging that could benefit both clinical EIT and broader applications in neurological and industrial domains.
Title: 2D Boundary Shape Detection Based on Camera for Enhanced Electrode Placement in Lung Electrical Impedance Tomography.
Pub Date : 2025-12-15DOI: 10.1088/2057-1976/ae2772
Kaiwei Hu, Yong Wang, Kaixiang Tu, Hongxiang Guo, Jun Yan
The recognition of steady-state visual evoked potential (SSVEP) signals in brain-computer interface (BCI) systems is challenging due to the lack of training data and significant inter-subject variability. To address this, we propose a novel unsupervised transfer learning framework that enhances SSVEP recognition without requiring any subject-specific calibration. Our method employs a three-stage pipeline: (1) preprocessing with similarity-aware subject selection and Euclidean alignment to mitigate domain shifts; (2) hybrid feature extraction combining canonical correlation analysis (CCA) and task-related component analysis (TRCA) to enhance signal-to-noise ratio and phase sensitivity; and (3) weighted correlation fusion for robust classification. Extensive evaluations on the Benchmark and BETA datasets demonstrate that our approach achieves state-of-the-art performance, with average accuracies of 83.20% and 69.08% at 1 s data length, respectively, significantly outperforming existing methods like ttCCA and Ensemble-DNN. The highest information transfer rate reaches 157.53 bits min⁻¹, underscoring the framework's practical potential for plug-and-play SSVEP-based BCIs.
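The information transfer rate quoted above is conventionally computed with the Wolpaw formula. A minimal sketch (the total selection time per trial, including any gaze-shift interval, is an assumption here, so this sketch does not by itself reproduce the 157.53 bits min⁻¹ figure):

```python
import math

def itr_bits_per_min(n_targets, accuracy, trial_s):
    """Wolpaw information transfer rate for an N-class BCI.

    n_targets: number of stimulation targets (e.g. 40 for the Benchmark dataset)
    accuracy:  classification accuracy P, with 1/N < P <= 1
    trial_s:   total time per selection in seconds (data length + gaze shift)
    """
    n, p = n_targets, accuracy
    bits = math.log2(n)
    if p < 1.0:  # the entropy terms vanish at P = 1
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / trial_s
```

At perfect accuracy the rate reduces to (60/T)·log₂N, and at chance level on a binary task it is 0 bits min⁻¹, which is a quick sanity check on the implementation.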
{"title":"Cross-domain correlation analysis to improve SSVEP signals recognition in brain-computer interfaces.","authors":"Kaiwei Hu, Yong Wang, Kaixiang Tu, Hongxiang Guo, Jun Yan","doi":"10.1088/2057-1976/ae2772","DOIUrl":"10.1088/2057-1976/ae2772","url":null,"abstract":"<p><p>The recognition of steady-state visual evoked potential (SSVEP) signals in brain-computer interface (BCI) systems is challenging due to the lack of training data and significant inter-subject variability. To address this, we propose a novel unsupervised transfer learning framework that enhances SSVEP recognition without requiring any subject-specific calibration. Our method employs a three-stage pipeline: (1) preprocessing with similarity-aware subject selection and Euclidean alignment to mitigate domain shifts; (2) hybrid feature extraction combining canonical correlation analysis (CCA) and task-related component analysis (TRCA) to enhance signal-to-noise ratio and phase sensitivity; and (3) weighted correlation fusion for robust classification. Extensive evaluations on the Benchmark and BETA datasets demonstrate that our approach achieves state-of-the-art performance, with average accuracies of 83.20% and 69.08% at 1 s data length, respectively-significantly outperforming existing methods like ttCCA and Ensemble-DNN. 
The highest information transfer rate reaches 157.53 bits min<sup>-1</sup>, underscoring the framework's practical potential for plug-and-play SSVEP-based BCIs.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145666631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-12-11DOI: 10.1088/2057-1976/ae2621
Agnese Robustelli Test, Chandra Bortolotto, Sithin Thulasi Seetha, Alessandra Marrocco, Carlotta Pairazzi, Gaia Messana, Leonardo Brizzi, Domenico Zacà, Robert Grimm, Francesca Brero, Manuel Mariani, Raffaella Fiamma Cabini, Giulia Maria Stella, Lorenzo Preda, Alessandro Lascialfari
Objective. Lung cancer remains the leading cause of cancer-related mortality worldwide, with Non-Small Cell Lung Cancer (NSCLC) accounting for approximately 85% of all cases. Programmed cell Death Ligand-1 (PD-L1) is a well-established biomarker that guides immunotherapy in advanced-stage NSCLC, currently evaluated via invasive biopsy procedures. This study aims to develop and validate a non-invasive pipeline for stratifying PD-L1 expression using quantitative analysis of IVIM parameter maps-diffusion (D), pseudo-diffusion (D*), perfusion fraction (pf)-and T1-VIBE MRI acquisitions. Approach. MRI data from 43 NSCLC patients were analysed and labelled as PD-L1 positive (≥1%) or negative (<1%) based on immunohistochemistry. After pre-processing, 1,171 radiomic features and 512 deep learning features were obtained. Three feature sets (radiomic, deep learning, and fusion) were tested with Logistic Regression, Random Forest, and XGBoost. Four discriminative features were selected using the Mann-Whitney U-test, and model performance was primarily assessed using the area under the receiver operating characteristic curve (AUC). Robustness was ensured through repeated stratified 5-fold cross-validation, bootstrap-derived confidence intervals, and permutation testing. Main Results. Logistic Regression generally demonstrated the highest classification performance, with AUC values ranging from 0.78 to 0.92 across all feature sets. Fusion models outperformed or matched the performance of the best standalone radiomics or deep learning model. Among multisequence MRI, the IVIM-D fusion features yielded the best performance with an AUC of 0.92, followed by IVIM-D* radiomic features that showed a similar AUC of 0.91. 
For IVIM-pf and T1-VIBE derived features, the fusion model yielded the best AUC values of 0.87 and 0.90, respectively. Significance. The obtained results highlight the potential of a combined radiomic-deep learning approach to effectively detect PD-L1 expression from MRI acquisitions, paving the way for a non-invasive PD-L1 evaluation procedure.
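The Mann-Whitney U-based selection of the four discriminative features can be sketched with scipy. This is an illustrative implementation only; `select_discriminative_features`, the two-sided alternative, and ranking purely by p-value are assumptions, not the authors' exact procedure:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def select_discriminative_features(X, y, k=4):
    """Rank features by Mann-Whitney U p-value between two label groups.

    X: (n_samples, n_features) feature matrix (radiomic and/or deep features)
    y: binary labels (1 = PD-L1 positive, 0 = PD-L1 negative)
    Returns the indices of the k features with the smallest p-values.
    """
    y = np.asarray(y)
    pvals = np.array([
        mannwhitneyu(X[y == 1, j], X[y == 0, j],
                     alternative="two-sided").pvalue
        for j in range(X.shape[1])
    ])
    # Smallest p-value = strongest separation between the two groups
    return np.argsort(pvals)[:k]
```

A non-parametric test is a reasonable default here because radiomic feature distributions are often skewed, and with 43 patients there is little basis for normality assumptions.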
{"title":"Multisequence MRI-driven assessment of PD-L1 expression in non-small cell lung cancer: a pilot study.","authors":"Agnese Robustelli Test, Chandra Bortolotto, Sithin Thulasi Seetha, Alessandra Marrocco, Carlotta Pairazzi, Gaia Messana, Leonardo Brizzi, Domenico Zacà, Robert Grimm, Francesca Brero, Manuel Mariani, Raffaella Fiamma Cabini, Giulia Maria Stella, Lorenzo Preda, Alessandro Lascialfari","doi":"10.1088/2057-1976/ae2621","DOIUrl":"https://doi.org/10.1088/2057-1976/ae2621","url":null,"abstract":"<p><p><i>Objective.</i>Lung cancer remains the leading cause of cancer-related mortality worldwide, with Non-Small Cell Lung Cancer (NSCLC) accounting for approximately 85% of all cases. Programmed cell Death Ligand-1 (PD-L1) is a well-established biomarker that guides immunotherapy in advanced-stage NSCLC, currently evaluated via invasive biopsy procedures. This study aims to develop and validate a non-invasive pipeline for stratifying PD-L1 expression using quantitative analysis of IVIM parameter maps-diffusion (D), pseudo-diffusion (D*), perfusion fraction (pf)-and T1-VIBE MRI acquisitions.<i>Approach.</i>MRI data from 43 NSCLC patients were analysed and labelled as PD-L1 positive (≥1%) or negative (<1%) based on immunohistochemistry exam. After pre-processing, 1,171 radiomic features and 512 deep learning features were obtained. Three feature sets (radiomic, deep learning, and fusion) were tested with Logistic Regression, Random Forest, and XGBoost. Four discriminative features were selected using the Mann-Whitney U-test, and model performance was primarily assessed using the area under the receiver operating characteristic curve (AUC). Robustness was ensured through repeated stratified 5-fold cross-validation, bootstrap-derived confidence intervals, and permutation test.<i>Main Results.</i>Logistic Regression generally demonstrated the highest classification performance, with AUC values ranging from 0.78 to 0.92 across all feature sets. 
Fusion models outperformed or matched the performance of the best standalone radiomics or deep learning model. Among multisequence MRI, the IVIM-D fusion features yielded the best performance with an AUC of 0.92, followed by IVIM-D* radiomic features that showed a similar AUC of 0.91. For IVIM-pf and T1-VIBE derived features, the fusion model yielded the best AUC values of 0.87 and 0.90, respectively.<i>Significance.</i>The obtained results highlight the potential of a combined radiomic-deep learning approach to effectively detect PD-L1 expression from MRI acquisitions, paving the way for a non-invasive PD-L1 evaluation procedure.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":"12 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145721043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}