Pub Date: 2026-01-21 | DOI: 10.1109/OJEMB.2026.3656806
{"title":"2025 Index IEEE Open Journal of Engineering in Medicine and Biology Vol. 6","authors":"","doi":"10.1109/OJEMB.2026.3656806","DOIUrl":"https://doi.org/10.1109/OJEMB.2026.3656806","url":null,"abstract":"","PeriodicalId":33825,"journal":{"name":"IEEE Open Journal of Engineering in Medicine and Biology","volume":"6 ","pages":"605-627"},"PeriodicalIF":2.9,"publicationDate":"2026-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11360594","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146026363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-18 | DOI: 10.1109/OJEMB.2025.3645617
Shogo Todoroki;Chatrin Phunruangsakao;Keisuke Goto;Kyo Kutsuzawa;Dai Owaki;Mitsuhiro Hayashibe
Objective: This study investigates the neurodynamics of motor imagery speed decoding using deep learning. Methods: The EEGConformer model was employed to analyze EEG signals and decode different speeds of imagined movements. Explainable artificial intelligence techniques were used to identify the temporal and spatial patterns within the EEG data related to imagined speeds, focusing on the role of specific frequency bands and cortical regions. Results: The model successfully decoded and extracted EEG patterns associated with different motor imagery speeds; however, classification accuracy was limited overall and high for only a few participants. The analysis highlighted the importance of alpha and beta oscillations and identified key cortical areas, including the frontal, motor, and occipital cortices, in speed decoding. Additionally, repeated motor imagery elicited steady-state movement-related potentials at the fundamental frequency and its harmonics, with the strongest responses observed at the second harmonic. Conclusions: Motor imagery (MI) speed is decodable, though classification performance remains limited. The results highlight the involvement of specific frequency bands and cortical regions, as well as steady-state responses, in encoding MI speed.
Title: "Deep Learning-Based Decoding and Feature Visualization of Motor Imagery Speeds From EEG Signals". IEEE Open Journal of Engineering in Medicine and Biology, vol. 7, pp. 27–34. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11303869
Pub Date: 2025-12-15 | DOI: 10.1109/OJEMB.2025.3644699
Aiyun Shen;Chang Li;Jingwei Yang;Guoning Huang;Xiaodong Zhang
Goal: Blastomere and pronucleus detection plays a crucial role in advancing research on embryo development and assisted reproductive technologies. However, because blastomeres frequently overlap and pronuclei are small, similar to the background, and weakly bounded, localizing them is extremely difficult. Methods: To address these challenges, we propose YOLO-VML, an improved detection model based on the YOLOv10 framework. The model integrates the visual state space (VSS) module of VMamba into the backbone network to enhance the global receptive field and enable broader feature capture. A multi-branch weighted feature pyramid network (MBFPN) is introduced as the neck structure to improve the preservation and fusion of features, especially those related to small targets. Additionally, a lightweight shared convolutional detection head (LSCD) is employed to reduce parameters and computational overhead while maintaining detection accuracy. Results: The proposed YOLO-VML model demonstrates excellent performance in detecting both blastomeres and pronuclei. It achieves a mean average precision (mAP@0.5) of 93.2% for pronuclei detection and 92.3% for blastomere detection beyond the 4-cell stage. Conclusions: YOLO-VML effectively addresses the difficulties in blastomere and pronucleus localization by enhancing feature representation and detection efficiency. Its high accuracy and efficiency make it a valuable tool for advancing embryo research and assisted reproductive technology applications.
Title: "YOLO-VML: An Improved Object Detection Model for Blastomeres and Pronuclei Localization in IoMT". IEEE Open Journal of Engineering in Medicine and Biology, vol. 7, pp. 35–42. Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11300956
Pub Date: 2025-12-08 | DOI: 10.1109/OJEMB.2025.3641824
Robert D. Butterfield;Nathaniel M. Sims
Goal: Assess the performance and potential use of a novel, servo-controlled, gravity-driven infusion device that received FDA regulatory clearance on 3/1/2024 (K242693). Introduction: "SAFEflow™" (SF), which uses real-time flow measurement and feedback control, has been cleared by the US FDA. We hypothesized that, because its architecture uses video imaging, there would be both benefits and functional contrasts relative to the behavior of legacy infusion pumps (LIPs). Methods: We conducted type-tests of critical metrics using AAMI and IEC standards together with computational simulations. Results were compared with the claimed and measured performance of two widely used LIPs. Results/Discussion: Across its rated flow range of 1–600 ml h⁻¹, SF's measured flow performance (−4.3% to +4.5% mean flow-rate accuracy at 95%/95% confidence/reliability) was superior to the claims of the two LIP designs. Occlusion detection was more consistent and rapid across flow rates and required less user interaction. Conclusion: SF's infusion performance was superior to that of LIPs, with reduced weight, size, and parts count. Regulatory evaluation standards may require updating for this class of infusion device.
{"title":"Performance Evaluation of a Novel Digital Flow-Imaging IV Infusion Device","authors":"Robert D. Butterfield;Nathaniel M. Sims","doi":"10.1109/OJEMB.2025.3641824","DOIUrl":"https://doi.org/10.1109/OJEMB.2025.3641824","url":null,"abstract":"<italic>Goal</i>: Assess performance and potential use of a novel, servo-controlled, gravity-driven infusion device with FDA regulatory clearance obtained 3/1/2024(K242693). <italic>Introduction:</i> \"<italic><u>S</u>AFE<u>f</u>low<sup>TM</sup></i>\" (SF) using real time flow measurement and feedback control, has been cleared by USFDA. We hypothesized that due to its architecture using video imaging there will be both benefits, and functional contrasts with the behavior of <underline>l</u>egacy <underline>i</u>nfusion <underline>p</u>umps (LIPs). <italic>Methods:</i> We conducted type-tests of critical metrics using AAMI and IEC standards together with computational simulations. Results were compared with claimed and measured performance of two widely-used LIPs. <italic>Results/Discussion:</i> Across its rated flow range of 1-600 ml h<sup>−1</sup>, SF’s measured <italic>flow</i> performance was superior (95/95% confidence/reliability −4.3% to +4.5% mean flow rate accuracy) to claims of two LIP designs, Occlusion detection was more consistent and rapid across flow rates and requiring less user interaction. <italic>Conclusion:</i> SF’s infusion performance was superior with reduced weight, size, and low parts count compared to LIPs. Regulatory evaluation standards may require updating for this class of infusion device.","PeriodicalId":33825,"journal":{"name":"IEEE Open Journal of Engineering in Medicine and Biology","volume":"7 ","pages":"43-46"},"PeriodicalIF":2.9,"publicationDate":"2025-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11284858","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146026520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-01 | DOI: 10.1109/OJEMB.2025.3639174
Sicong Huang;Roozbeh Jafari;Bobak J. Mortazavi
Goal: Continuous arterial blood pressure (ABP) waveform monitoring is essential for hemodynamic assessment but is invasive. Current non-invasive techniques reconstruct ABP waveforms from pulsatile signals but derive inaccurate systolic and diastolic blood pressure (SBP/DBP) values and are sensitive to individual variability. Methods: ArterialNet integrates generalized pulsatile-to-ABP signal translation and personalized feature extraction using hybrid loss functions and regularizations. Results: ArterialNet achieved a root mean square error (RMSE) of 5.41 ± 1.35 mmHg on MIMIC-III, with a 58% lower standard deviation than existing signal translation techniques. ArterialNet also reconstructed ABP with an RMSE of 7.99 ± 1.91 mmHg in a remote health scenario. Conclusion: ArterialNet achieved superior performance in ABP reconstruction and SBP/DBP estimation with significantly reduced subject variance, demonstrating its potential in remote health settings. We also ablated ArterialNet's architecture to investigate the contribution of each component, and evaluated its translational impact and robustness through a series of ablations on data quality and availability.
{"title":"ArterialNet: Reconstructing Arterial Blood Pressure Waveform With Wearable Pulsatile Signals, a Cohort-Aware Approach","authors":"Sicong Huang;Roozbeh Jafari;Bobak J. Mortazavi","doi":"10.1109/OJEMB.2025.3639174","DOIUrl":"https://doi.org/10.1109/OJEMB.2025.3639174","url":null,"abstract":"<italic>Goal</i>: Continuous arterial blood pressure (ABP) waveform is invasive but essential for hemodynamic monitoring. Current non-invasive techniques reconstruct ABP waveforms with pulsatile signals but derived inaccurate systolic and diastolic blood pressure (SBP/DBP) and were sensitive to individual variability. <italic>Methods:</i> ArterialNet integrates generalized pulsatile-to-ABP signal translation and personalized feature extraction using hybrid loss functions and regularizations. <italic>Results:</i> ArterialNet achieved a root mean square error (RMSE) of 5.41 ± 1.35 mmHg on MIMIC-III, achieving 58% lower standard deviation than existing signal translation techniques. ArterialNet also reconstructed ABP with RMSE of 7.99 ± 1.91 mmHg in remote health scenario. <italic>Conclusion:</i> ArterialNet achieved superior performance in ABP reconstruction and SBP/DBP estimations with significantly reduced subject variance, demonstrating its potential in remote health settings. We also ablated ArterialNet's architecture to investigate contributions of each component and evaluated ArterialNet's translational impact and robustness by conducting a series of ablations on data quality and availability.","PeriodicalId":33825,"journal":{"name":"IEEE Open Journal of Engineering in Medicine and Biology","volume":"7 ","pages":"14-19"},"PeriodicalIF":2.9,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11271643","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145778255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-14 | DOI: 10.1109/OJEMB.2025.3633051
Bianca Reichard;Mirco Fuchs;Kerstin Bode
Goal: We introduce a continuous, multimodal pain classification technique that uses camera-based data collected in clinical settings. Methods: We integrate facial Action Units (AUs) obtained from video samples with sequential vital parameters extracted from video data, and systematically validate the practicality of measuring heart rate variability (HRV) from video-derived photoplethysmographic signals against traditional sensor-based electrocardiogram measurements. Video-based AUs and HRV metrics acquired from ultra-short-term processing are combined into an automated, contactless, multimodal algorithm for binary pain classification. Using logistic regression with leave-one-out cross-validation, the approach is developed and validated on the BioVid Heat Pain Database and subsequently tested on our surgical Individual Patient Data. Results: We achieve an F1-score of 53% on the BioVid Heat Pain Database and 48% on our Individual Patient Data with ultra-short-term processing. Conclusion: Our approach provides a robust foundation for future multimodal pain classification using vital signs and facial-expression (mimic) parameters from 5.5 s camera recordings.
{"title":"Continuous, Contactless, and Multimodal Pain Assessment During Surgical Interventions","authors":"Bianca Reichard;Mirco Fuchs;Kerstin Bode","doi":"10.1109/OJEMB.2025.3633051","DOIUrl":"https://doi.org/10.1109/OJEMB.2025.3633051","url":null,"abstract":"<italic>Goal</i>: We introduce a continuous, multimodal pain classification technique that utilizes camera-based data conducted in clinical settings. <italic>Methods</i>: We integrate facial Action Units (AUs) obtained from samples with sequential vital parameters extracted from video data, and systematically validate the practicality of measuring heart rate variability (HRV) from video-derived photoplethysmographic signals against traditional sensor-based electrocardiogram measurements. Video-based AUs and HRV metrics acquired from ultra-short-term processing are combined into an automated, contactless, multimodal algorithm for binary pain classification. Utilizing logistic regression alongside leave-one-out cross-validation, this approach is developed and validated using the BioVid Heat Pain Database and subsequently tested with our surgical Individual Patient Data. <italic>Results</i>: We achieve an F1-score of 53% on the BioVid Heat Pain Database and 48% on our Individual Patient Data with ultra-short-term processing. <italic>Conclusion</i>: Our approach provides a robust foundation for future multimodal pain classification utilizing vital signs and mimic parameters from 5.5 s camera recordings.","PeriodicalId":33825,"journal":{"name":"IEEE Open Journal of Engineering in Medicine and Biology","volume":"7 ","pages":"1-6"},"PeriodicalIF":2.9,"publicationDate":"2025-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11249745","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145674785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-10 | DOI: 10.1109/OJEMB.2025.3630901
Jingting Yao;Shidong Xu;Isabela G. G. Choi;Otavio Henrique Pinhata-Baptista;Jerome L. Ackerman
Objective: Oral implant procedures necessitate assessment of alveolar bone, a vital tooth-supporting structure. While micro-computed tomography (micro-CT) is the gold standard for bone volume fraction assessment owing to its high spatial resolution and bone/soft-tissue contrast, its substantial radiation exposure limits its use to specimens or small animals. This study evaluates the accuracy of 1.5T magnetic resonance imaging (MRI) in determining bone volume fraction, a surrogate of bone density, using micro-CT as the reference. Methods: Twenty-one alveolar bone biopsy specimens, which had undergone cone beam CT, micro-CT, and 14T MRI in a previous study, were subjected to 1.5T MRI. Results: Bone volume fraction measured by 1.5T MRI and by micro-CT showed a statistically significant correlation (r = 0.70, p < 0.0001). Consistency of results was assessed through repeated scans and repeated analyses. Conclusion: 1.5T MRI may be an effective, radiation-free tool for alveolar bone volume fraction assessment.
{"title":"Assessing Alveolar Bone Volume Fraction in Dental Implantology Using 1.5 Tesla Magnetic Resonance Imaging: An Ex Vivo Cross-Sectional Study","authors":"Jingting Yao;Shidong Xu;Isabela G. G. Choi;Otavio Henrique Pinhata-Baptista;Jerome L. Ackerman","doi":"10.1109/OJEMB.2025.3630901","DOIUrl":"https://doi.org/10.1109/OJEMB.2025.3630901","url":null,"abstract":"<bold>Objective:</b> Oral implant procedures necessitate assessment of alveolar bone, a vital tooth-supporting structure. While micro-computed tomography (micro-CT) is the gold standard for bone volume fraction assessment for its high spatial resolution and bone/soft tissue contrast, its substantial radiation exposure limits its use to specimens or small animals. This study evaluates the accuracy of 1.5T magnetic resonance imaging (MRI) in determining bone volume fraction, a surrogate of bone density, using micro-CT as the reference. <bold>Methods:</b> Twenty-one alveolar bone biopsy specimens, which had undergone cone beam CT, micro-CT, and 14T MRI in a previous study, were subjected to 1.5T MRI. <bold>Results:</b> The comparison between bone volume fraction measured by 1.5T MRI and micro-CT demonstrated a statistically significant correlation (r = 0.70, p < 0.0001). Consistency in results was investigated through repeated scans and repeated scanning and analyses. <bold>Conclusion:</b> 1.5T MRI may be an effective, radiation-free tool for alveolar bone volume fraction assessment.","PeriodicalId":33825,"journal":{"name":"IEEE Open Journal of Engineering in Medicine and Biology","volume":"7 ","pages":"7-13"},"PeriodicalIF":2.9,"publicationDate":"2025-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11236089","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145729450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-23 | DOI: 10.1109/OJEMB.2025.3624566
Jenn-Kaie Lain;Chung-An Wang;Jun-Hao Xu;Chen-Wei Lee
Goal: This study presents an enhanced stacked U-Net deep learning model for cuffless blood pressure estimation using only photoplethysmogram signals, aiming to improve the accuracy of non-invasive measurements. Methods: To address the challenges of systolic blood pressure estimation, the model incorporates velocity plethysmogram input and employs additive spatial and channel attention mechanisms. These enhancements improve feature extraction and mitigate decoder mismatches in the U-Net architecture. Results: The model satisfies the Grade A criteria established by the British Hypertension Society and meets the accuracy standards of the Association for the Advancement of Medical Instrumentation, achieving mean absolute errors of 3.921 mmHg for systolic and 2.441 mmHg for diastolic blood pressure. It outperforms PPG-only spectro-temporal methods and achieves comparable performance to the joint photoplethysmogram and electrocardiogram one-dimensional squeeze-and-excitation network with long short-term memory architecture. Conclusions: The proposed model shows strong potential as a practical, low-cost, and non-invasive solution for continuous, cuffless blood pressure monitoring.
{"title":"Development of an Improved Stacked U-Net Model for Cuffless Blood Pressure Estimation Based on PPG Signals","authors":"Jenn-Kaie Lain;Chung-An Wang;Jun-Hao Xu;Chen-Wei Lee","doi":"10.1109/OJEMB.2025.3624566","DOIUrl":"https://doi.org/10.1109/OJEMB.2025.3624566","url":null,"abstract":"<italic>Goal:</i> This study presents an enhanced stacked U-Net deep learning model for cuffless blood pressure estimation using only photoplethysmogram signals, aiming to improve the accuracy of non-invasive measurements. <italic>Methods:</i> To address the challenges of systolic blood pressure estimation, the model incorporates velocity plethysmogram input and employs additive spatial and channel attention mechanisms. These enhancements improve feature extraction and mitigate decoder mismatches in the U-Net architecture. <italic>Results:</i> The model satisfies the Grade A criteria established by the British Hypertension Society and meets the accuracy standards of the Association for the Advancement of Medical Instrumentation, achieving mean absolute errors of 3.921 mmHg for systolic and 2.441 mmHg for diastolic blood pressure. It outperforms PPG-only spectro-temporal methods and achieves comparable performance to the joint photoplethysmogram and electrocardiogram one-dimensional squeeze-and-excitation network with long short-term memory architecture. <italic>Conclusions:</i> The proposed model shows strong potential as a practical, low-cost, and non-invasive solution for continuous, cuffless blood pressure monitoring.","PeriodicalId":33825,"journal":{"name":"IEEE Open Journal of Engineering in Medicine and Biology","volume":"6 ","pages":"584-590"},"PeriodicalIF":2.9,"publicationDate":"2025-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11215636","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145455918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-23 | DOI: 10.1109/OJEMB.2025.3624582
Chien-Yu Chiou;Wei-Li Chen;Chun-Rong Huang;Yang C. Fann;Lawrence L. Latour;Pau-Choo Chung
Goal: Pathology images collected from different hospitals often show large appearance variability caused by different scanners, patients, or hospital protocols. Deep learning-based pathology segmentation models are highly dependent on the distribution of the training data, so they often suffer from the domain shift problem when applied to new target domains from different hospitals. Methods: To address this issue, we propose a hierarchical cross-consistency (HCC) network that hierarchically adapts models across pathology images of various domains with three consistency-based modules: a consistency module, a pair module, and a mixture module. The consistency module enhances the prediction consistency of each target image under various perturbations. The pair module improves consistency among different target images. Finally, the mixture module enhances consistency across different domains. Results: Experimental results on pathology image datasets scanned using three different scanners show the superiority of the proposed HCC network over state-of-the-art unsupervised domain adaptation methods. Conclusions: The proposed method successfully adapts trained pathology image segmentation models to new target domains, which is useful when deploying the models at different hospitals.
{"title":"Hierarchical Cross-Consistency Network Based Unsupervised Domain Adaptation for Pathology Whole Slide Image Segmentation","authors":"Chien-Yu Chiou;Wei-Li Chen;Chun-Rong Huang;Yang C. Fann;Lawrence L. Latour;Pau-Choo Chung","doi":"10.1109/OJEMB.2025.3624582","DOIUrl":"https://doi.org/10.1109/OJEMB.2025.3624582","url":null,"abstract":"<italic>Goal:</i> Pathology images collected from different hospitals often have large appearance variability causedby different scanners, patients, or hospital protocols. Deep learning-based pathology segmentation models are highly dependent on the distribution of training data. Therefore, the models often suffer from the domain shift problem when applied to new target domains of different hospitals. <italic>Methods:</i> To address this issue, we propose a hierarchical cross-consistency (HCC) network to hierarchically adapt models across pathology images of various domains with three consistency-based modules, the consistency module, the pair module, and the mixture module. The consistency module enhances the prediction consistency of each target image under various perturbations. The pair module improves consistency among different target images. Finally, the mixture module enhances the consistency across different domains. <italic>Results:</i> The experimental results on pathology image datasets scanned using three different scanners show the superiority of the proposed HCC network compared to state-of-the-art unsupervised domain adaptation methods. <italic>Conclusions:</i> The proposed method can successfully adapt trained pathology image segmentation models to new target domains, which is useful when introducing the models to different hospitals.","PeriodicalId":33825,"journal":{"name":"IEEE Open Journal of Engineering in Medicine and Biology","volume":"6 ","pages":"598-604"},"PeriodicalIF":2.9,"publicationDate":"2025-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11215652","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145560627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-23 | DOI: 10.1109/OJEMB.2025.3624591
L. Feld;S. Hellmers;L. Schell-Majoor;J. Koschate-Storm;T. Zieschang;A. Hein;B. Kollmeier
Objective: Older adults face a heightened fall risk, which can severely impact their health. Individual responses to unexpected gait perturbations (e.g., slips) are potential predictors of this risk. This study examines automatic detection of treadmill-generated gait perturbations using acceleration and angular velocity from everyday wearables. Detection is achieved using a deep convolutional long short-term memory (DeepConvLSTM) algorithm. Results: An F1 score of at least 0.68 and a recall of 0.86 were obtained for all data, i.e., data from hearing aids, smartphones at various positions, and professional sensors at the lumbar spine and sternum. Performance did not change significantly when combining data from different sensor positions or when using only acceleration data. Conclusion: Results suggest that hearing aids and smartphones can monitor gait perturbations with performance similar to professional equipment, highlighting the potential of everyday wearables for continuous fall risk monitoring.
{"title":"Automatic Detection of Gait Perturbations With Everyday Wearable Technology","authors":"L. Feld;S. Hellmers;L. Schell-Majoor;J. Koschate-Storm;T. Zieschang;A. Hein;B. Kollmeier","doi":"10.1109/OJEMB.2025.3624591","DOIUrl":"10.1109/OJEMB.2025.3624591","url":null,"abstract":"<italic>Objective:</i> Older adults face a heightened fall risk, which can severely impact their health. Individual responses to unexpected gait perturbations (e.g., slips) are potential predictors of this risk. This study examines automatic detection of treadmill-generated gait perturbations using acceleration and angular velocity from everyday wearables. Detection is achieved using a deep convolutional long short-term memory (DeepConvLSTM) algorithm. <italic>Results:</i> An F1 score of at least 0.68 and recall of 0.86 was retrieved for all data, i.e., data from hearing aids, smartphones at various positions and professional sensors at lumbar and sternum. Performance did not significantly change when combining data from different sensor positions or using only acceleration data. <italic>Conclusion:</i> Results suggest that hearing aids and smartphones can monitor gait perturbations with similar performance as professional equipment, highlighting the potential of everyday wearables for continuous fall risk monitoring.","PeriodicalId":33825,"journal":{"name":"IEEE Open Journal of Engineering in Medicine and Biology","volume":"6 ","pages":"570-575"},"PeriodicalIF":2.9,"publicationDate":"2025-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12599889/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145497023","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}