
Latest Publications from IEEE Open Journal of Engineering in Medicine and Biology

2025 Index IEEE Open Journal of Engineering in Medicine and Biology Vol. 6
IF 2.9 · Q3 (Engineering, Biomedical) · Pub Date: 2026-01-21 · DOI: 10.1109/OJEMB.2026.3656806
{"title":"2025 Index IEEE Open Journal of Engineering in Medicine and Biology Vol. 6","authors":"","doi":"10.1109/OJEMB.2026.3656806","DOIUrl":"https://doi.org/10.1109/OJEMB.2026.3656806","url":null,"abstract":"","PeriodicalId":33825,"journal":{"name":"IEEE Open Journal of Engineering in Medicine and Biology","volume":"6 ","pages":"605-627"},"PeriodicalIF":2.9,"publicationDate":"2026-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11360594","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146026363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep Learning-Based Decoding and Feature Visualization of Motor Imagery Speeds From EEG Signals
IF 2.9 · Q3 (Engineering, Biomedical) · Pub Date: 2025-12-18 · DOI: 10.1109/OJEMB.2025.3645617
Shogo Todoroki;Chatrin Phunruangsakao;Keisuke Goto;Kyo Kutsuzawa;Dai Owaki;Mitsuhiro Hayashibe
Objective: This study investigates the neurodynamics of motor imagery (MI) speed decoding using deep learning. Methods: The EEGConformer model was employed to analyze EEG signals and decode different speeds of imagined movements. Explainable artificial intelligence techniques were used to identify the temporal and spatial patterns within the EEG data related to imagined speeds, focusing on the role of specific frequency bands and cortical regions. Results: The model successfully decoded and extracted EEG patterns associated with different motor imagery speeds; however, overall classification accuracy was limited and was high for only a few participants. The analysis highlighted the importance of alpha and beta oscillations in speed decoding and identified key cortical areas, including the frontal, motor, and occipital cortices. Additionally, repeated motor imagery elicited steady-state movement-related potentials at the fundamental repetition frequency, with the strongest responses observed at the second harmonic. Conclusions: Motor imagery speed is decodable, though classification performance remains limited. The results highlight the involvement of specific frequency bands, cortical regions, and steady-state responses in encoding MI speed.
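The steady-state analysis described above can be illustrated with a short spectral sketch. The snippet below is a minimal, hedged example of estimating spectral amplitude at an assumed MI repetition frequency and its second harmonic; the sampling rate, repetition frequency, and synthetic signal are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: spectral amplitude at an assumed motor-imagery repetition
# rate and its second harmonic, from a single synthetic EEG-like channel.
# fs, f_rep, and the signal itself are illustrative assumptions.
import numpy as np

fs = 250.0          # sampling rate in Hz (assumed)
f_rep = 1.0         # assumed MI repetition (fundamental) frequency in Hz
t = np.arange(0, 30, 1 / fs)

# Synthetic signal: noise plus weak components at f_rep and 2*f_rep.
rng = np.random.default_rng(0)
eeg = (rng.normal(scale=5.0, size=t.size)
       + 0.8 * np.sin(2 * np.pi * f_rep * t)
       + 1.2 * np.sin(2 * np.pi * 2 * f_rep * t))

spectrum = np.abs(np.fft.rfft(eeg * np.hanning(eeg.size))) / eeg.size
freqs = np.fft.rfftfreq(eeg.size, d=1 / fs)

def peak_amplitude(target_hz, half_bw=0.2):
    """Mean spectral amplitude in a narrow band around target_hz."""
    band = (freqs >= target_hz - half_bw) & (freqs <= target_hz + half_bw)
    return spectrum[band].mean()

print("fundamental:", peak_amplitude(f_rep))
print("2nd harmonic:", peak_amplitude(2 * f_rep))
```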
{"title":"Deep Learning-Based Decoding and Feature Visualization of Motor Imagery Speeds From EEG Signals","authors":"Shogo Todoroki;Chatrin Phunruangsakao;Keisuke Goto;Kyo Kutsuzawa;Dai Owaki;Mitsuhiro Hayashibe","doi":"10.1109/OJEMB.2025.3645617","DOIUrl":"https://doi.org/10.1109/OJEMB.2025.3645617","url":null,"abstract":"<italic>Objective:</i> This study investigates the neurodynamics of motor imagery speed decoding using deep learning. <italic>Methods:</i> The EEGConformer model was employed to analyze EEG signals and decode different speeds of imagined movements. Explainable artificial intelligence techniques were used to identify the temporal and spatial patterns within the EEG data related to imagined speeds, focusing on the role of specific frequency bands and cortical regions. <italic>Results:</i> The model successfully decoded and extracted EEG patterns associated with different motor imagery speeds; however, the classification accuracy was limited and high only for a few participants. The analysis highlighted the importance of alpha and beta oscillations and identified key cortical areas, including the frontal, motor, and occipital cortices, in speed decoding. Additionally, repeated motor imagery elicited steady-state movement-related potentials at the fundamental frequency, with the strongest responses observed at the second harmonic. <italic>Conclusions:</i> Motor imagery speed is decodable, though classification performance remains limited. The results highlight the involvement of specific frequency bands and cortical regions, as well as steady-state responses, in encoding MI speed.","PeriodicalId":33825,"journal":{"name":"IEEE Open Journal of Engineering in Medicine and Biology","volume":"7 ","pages":"27-34"},"PeriodicalIF":2.9,"publicationDate":"2025-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11303869","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145982165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
YOLO-VML: An Improved Object Detection Model for Blastomeres and Pronuclei Localization in IoMT
IF 2.9 · Q3 (Engineering, Biomedical) · Pub Date: 2025-12-15 · DOI: 10.1109/OJEMB.2025.3644699
Aiyun Shen;Chang Li;Jingwei Yang;Guoning Huang;Xiaodong Zhang
Goal: Detection of blastomeres and pronuclei plays a crucial role in advancing research on embryo development and assisted reproductive technologies. However, because blastomeres frequently overlap and pronuclei are small, similar to the background, and poorly delineated, their localization is extremely difficult. Methods: To address these challenges, we propose YOLO-VML, an improved detection model based on the YOLOv10 framework. The model integrates the visual state space (VSS) module of VMamba into the backbone network to enhance the global receptive field and enable broader feature capture. A multi-branch weighted feature pyramid network (MBFPN) is introduced as the neck structure to improve the preservation and fusion of features, especially those related to small targets. Additionally, a lightweight shared convolutional detection head (LSCD) is employed to reduce parameters and computational overhead while maintaining detection accuracy. Results: The proposed YOLO-VML model demonstrates excellent performance in detecting both blastomeres and pronuclei, achieving a mean average precision (mAP@0.5) of 93.2% for pronucleus detection and 92.3% for blastomere detection beyond the 4-cell stage. Conclusions: YOLO-VML effectively addresses the difficulties of blastomere and pronucleus localization by enhancing feature representation and detection efficiency. Its high accuracy and efficiency make it a valuable tool for advancing embryo research and assisted reproductive technology applications.
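The mAP@0.5 figures reported above rest on an intersection-over-union (IoU) match test at a 0.5 threshold. The sketch below shows that underlying IoU check for a single predicted/ground-truth box pair; the box format and coordinates are illustrative assumptions, not details of the YOLO-VML pipeline.

```python
# Hedged sketch of the IoU test underlying an mAP@0.5 metric; box format
# (x1, y1, x2, y2) and the example coordinates are illustrative assumptions.
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

pred, truth = (10, 10, 50, 60), (12, 8, 48, 58)
print("match at IoU>=0.5:", iou(pred, truth) >= 0.5)
```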
{"title":"YOLO-VML: An Improved Object Detection Model for Blastomeres and Pronuclei Localization in IoMT","authors":"Aiyun Shen;Chang Li;Jingwei Yang;Guoning Huang;Xiaodong Zhang","doi":"10.1109/OJEMB.2025.3644699","DOIUrl":"https://doi.org/10.1109/OJEMB.2025.3644699","url":null,"abstract":"<italic>Goal:</i> Blastomeres and pronuclei detection plays a crucial role in advancing research on embryo development and assisted reproductive technologies. However, due to the frequent overlapping of blastomeres and the pronuclei's small size, background similarity, and unclear boundaries, their localization proves to be extremely difficult. <italic>Methods:</i> To address these challenges, we propose YOLO-VML, an improved detection model based on the YOLOv10 framework. The model integrates the visual state space (VSS) module of VMamba into the backbone network to enhance the global receptive field and enable broader feature capture. A multi-branch weighted feature pyramid network (MBFPN) is introduced as the neck structure to improve the preservation and fusion of features, especially those related to small targets. Additionally, a lightweight shared convolutional detection head (LSCD) is employed to reduce parameters and computational overhead while maintaining detection accuracy. <italic>Results:</i> The proposed YOLO-VML model demonstrates excellent performance in detecting both blastomeres and pronuclei. It achieves a mean average precision (mAP@0.5) of 93.2% for pronuclei detection and 92.3% for blastomere detection beyond the 4-cell stage. <italic>Conclusions:</i> YOLO-VML effectively addresses the difficulties in blastomere and pronucleus localization by enhancing feature representation and detection efficiency. Its high accuracy and efficiency make it a valuable tool for advancing embryo research and assisted reproductive technology applications.","PeriodicalId":33825,"journal":{"name":"IEEE Open Journal of Engineering in Medicine and Biology","volume":"7 ","pages":"35-42"},"PeriodicalIF":2.9,"publicationDate":"2025-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11300956","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145982309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Performance Evaluation of a Novel Digital Flow-Imaging IV Infusion Device
IF 2.9 · Q3 (Engineering, Biomedical) · Pub Date: 2025-12-08 · DOI: 10.1109/OJEMB.2025.3641824
Robert D. Butterfield;Nathaniel M. Sims
Goal: Assess the performance and potential use of a novel, servo-controlled, gravity-driven infusion device with FDA regulatory clearance obtained 3/1/2024 (K242693). Introduction: SAFEflow™ (SF), which uses real-time flow measurement and feedback control, has been cleared by the US FDA. We hypothesized that, owing to its video-imaging-based architecture, it would show both benefits and functional contrasts relative to the behavior of legacy infusion pumps (LIPs). Methods: We conducted type tests of critical metrics using AAMI and IEC standards together with computational simulations. Results were compared with the claimed and measured performance of two widely used LIPs. Results/Discussion: Across its rated flow range of 1-600 ml/h, SF's measured flow performance (95/95% confidence/reliability; −4.3% to +4.5% mean flow rate accuracy) was superior to the claims of the two LIP designs. Occlusion detection was more consistent and rapid across flow rates and required less user interaction. Conclusion: SF's infusion performance was superior to that of the LIPs, with reduced weight, size, and parts count. Regulatory evaluation standards may require updating for this class of infusion device.
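For context on the flow-rate accuracy metric, the sketch below shows a minimal gravimetric flow-rate-error calculation in the spirit of IEC 60601-2-24-style type testing; the sample interval, programmed rate, assumed fluid density of 1.0 g/mL, and mass readings are illustrative assumptions, not data from this evaluation.

```python
# Hedged sketch: percent flow-rate error from gravimetric samples.
# Density, interval, set rate, and readings are illustrative assumptions.
masses_g = [0.000, 0.166, 0.334, 0.500, 0.668]   # cumulative mass readings (g)
interval_s = 60.0                                 # sampling interval (assumed)
set_rate_ml_h = 10.0                              # programmed flow rate (assumed)

delivered_ml = masses_g[-1] / 1.0                 # assume density 1.0 g/mL
elapsed_h = (len(masses_g) - 1) * interval_s / 3600.0
measured_rate = delivered_ml / elapsed_h
error_pct = 100.0 * (measured_rate - set_rate_ml_h) / set_rate_ml_h
print(f"measured {measured_rate:.2f} mL/h, error {error_pct:+.1f}%")
```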
{"title":"Performance Evaluation of a Novel Digital Flow-Imaging IV Infusion Device","authors":"Robert D. Butterfield;Nathaniel M. Sims","doi":"10.1109/OJEMB.2025.3641824","DOIUrl":"https://doi.org/10.1109/OJEMB.2025.3641824","url":null,"abstract":"<italic>Goal</i>: Assess performance and potential use of a novel, servo-controlled, gravity-driven infusion device with FDA regulatory clearance obtained 3/1/2024(K242693). <italic>Introduction:</i> \"<italic><u>S</u>AFE<u>f</u>low<sup>TM</sup></i>\" (SF) using real time flow measurement and feedback control, has been cleared by USFDA. We hypothesized that due to its architecture using video imaging there will be both benefits, and functional contrasts with the behavior of <underline>l</u>egacy <underline>i</u>nfusion <underline>p</u>umps (LIPs). <italic>Methods:</i> We conducted type-tests of critical metrics using AAMI and IEC standards together with computational simulations. Results were compared with claimed and measured performance of two widely-used LIPs. <italic>Results/Discussion:</i> Across its rated flow range of 1-600 ml h<sup>−1</sup>, SF’s measured <italic>flow</i> performance was superior (95/95% confidence/reliability −4.3% to +4.5% mean flow rate accuracy) to claims of two LIP designs, Occlusion detection was more consistent and rapid across flow rates and requiring less user interaction. <italic>Conclusion:</i> SF’s infusion performance was superior with reduced weight, size, and low parts count compared to LIPs. Regulatory evaluation standards may require updating for this class of infusion device.","PeriodicalId":33825,"journal":{"name":"IEEE Open Journal of Engineering in Medicine and Biology","volume":"7 ","pages":"43-46"},"PeriodicalIF":2.9,"publicationDate":"2025-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11284858","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146026520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
ArterialNet: Reconstructing Arterial Blood Pressure Waveform With Wearable Pulsatile Signals, a Cohort-Aware Approach
IF 2.9 · Q3 (Engineering, Biomedical) · Pub Date: 2025-12-01 · DOI: 10.1109/OJEMB.2025.3639174
Sicong Huang;Roozbeh Jafari;Bobak J. Mortazavi
Goal: Continuous arterial blood pressure (ABP) waveform measurement is essential for hemodynamic monitoring but is invasive. Current non-invasive techniques reconstruct ABP waveforms from pulsatile signals but yield inaccurate systolic and diastolic blood pressure (SBP/DBP) estimates and are sensitive to individual variability. Methods: ArterialNet integrates generalized pulsatile-to-ABP signal translation and personalized feature extraction using hybrid loss functions and regularizations. Results: ArterialNet achieved a root mean square error (RMSE) of 5.41 ± 1.35 mmHg on MIMIC-III, with a 58% lower standard deviation than existing signal translation techniques. ArterialNet also reconstructed ABP with an RMSE of 7.99 ± 1.91 mmHg in a remote health scenario. Conclusion: ArterialNet achieved superior performance in ABP reconstruction and SBP/DBP estimation with significantly reduced subject variance, demonstrating its potential in remote health settings. We also ablated ArterialNet's architecture to investigate the contribution of each component and evaluated its translational impact and robustness through a series of ablations on data quality and availability.
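The RMSE values quoted above are reported as mean ± standard deviation across subjects. The sketch below illustrates that per-subject RMSE reporting on synthetic waveform samples; the subject count and signal values are placeholders, not ArterialNet outputs.

```python
# Hedged sketch of per-subject RMSE reporting in "mean ± std" form; the
# waveform arrays are synthetic illustrative data, not study results.
import numpy as np

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

rng = np.random.default_rng(1)
per_subject = []
for _ in range(10):                        # pretend 10 held-out subjects
    abp = rng.uniform(60, 120, size=1000)  # reference ABP samples (mmHg)
    est = abp + rng.normal(scale=5.0, size=abp.size)
    per_subject.append(rmse(abp, est))

print(f"RMSE {np.mean(per_subject):.2f} ± {np.std(per_subject):.2f} mmHg")
```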
{"title":"ArterialNet: Reconstructing Arterial Blood Pressure Waveform With Wearable Pulsatile Signals, a Cohort-Aware Approach","authors":"Sicong Huang;Roozbeh Jafari;Bobak J. Mortazavi","doi":"10.1109/OJEMB.2025.3639174","DOIUrl":"https://doi.org/10.1109/OJEMB.2025.3639174","url":null,"abstract":"<italic>Goal</i>: Continuous arterial blood pressure (ABP) waveform is invasive but essential for hemodynamic monitoring. Current non-invasive techniques reconstruct ABP waveforms with pulsatile signals but derived inaccurate systolic and diastolic blood pressure (SBP/DBP) and were sensitive to individual variability. <italic>Methods:</i> ArterialNet integrates generalized pulsatile-to-ABP signal translation and personalized feature extraction using hybrid loss functions and regularizations. <italic>Results:</i> ArterialNet achieved a root mean square error (RMSE) of 5.41 ± 1.35 mmHg on MIMIC-III, achieving 58% lower standard deviation than existing signal translation techniques. ArterialNet also reconstructed ABP with RMSE of 7.99 ± 1.91 mmHg in remote health scenario. <italic>Conclusion:</i> ArterialNet achieved superior performance in ABP reconstruction and SBP/DBP estimations with significantly reduced subject variance, demonstrating its potential in remote health settings. We also ablated ArterialNet's architecture to investigate contributions of each component and evaluated ArterialNet's translational impact and robustness by conducting a series of ablations on data quality and availability.","PeriodicalId":33825,"journal":{"name":"IEEE Open Journal of Engineering in Medicine and Biology","volume":"7 ","pages":"14-19"},"PeriodicalIF":2.9,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11271643","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145778255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Continuous, Contactless, and Multimodal Pain Assessment During Surgical Interventions
IF 2.9 · Q3 (Engineering, Biomedical) · Pub Date: 2025-11-14 · DOI: 10.1109/OJEMB.2025.3633051
Bianca Reichard;Mirco Fuchs;Kerstin Bode
Goal: We introduce a continuous, multimodal pain classification technique that utilizes camera-based data acquired in clinical settings. Methods: We integrate facial Action Units (AUs) obtained from samples with sequential vital parameters extracted from video data, and systematically validate the practicality of measuring heart rate variability (HRV) from video-derived photoplethysmographic signals against traditional sensor-based electrocardiogram measurements. Video-based AUs and HRV metrics acquired from ultra-short-term processing are combined into an automated, contactless, multimodal algorithm for binary pain classification. Using logistic regression with leave-one-out cross-validation, this approach is developed and validated on the BioVid Heat Pain Database and subsequently tested on our surgical Individual Patient Data. Results: We achieve an F1-score of 53% on the BioVid Heat Pain Database and 48% on our Individual Patient Data with ultra-short-term processing. Conclusion: Our approach provides a robust foundation for future multimodal pain classification utilizing vital signs and mimic parameters from 5.5 s camera recordings.
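The classifier and validation scheme named above (logistic regression with leave-one-out cross-validation, scored by F1) can be sketched in a few lines. The example below uses random placeholder features standing in for the AU and ultra-short-term HRV descriptors; it is a minimal illustration, not the authors' implementation.

```python
# Hedged sketch: binary classification with logistic regression and
# leave-one-out cross-validation, scored with F1. Features and labels are
# random placeholders, not AU/HRV data from the study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 12))      # 60 samples, 12 AU/HRV-like features (assumed)
y = rng.integers(0, 2, size=60)    # binary pain labels (placeholder)

preds = np.empty_like(y)
for train_idx, test_idx in LeaveOneOut().split(X):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    preds[test_idx] = clf.predict(X[test_idx])

print("F1:", f1_score(y, preds))
```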
{"title":"Continuous, Contactless, and Multimodal Pain Assessment During Surgical Interventions","authors":"Bianca Reichard;Mirco Fuchs;Kerstin Bode","doi":"10.1109/OJEMB.2025.3633051","DOIUrl":"https://doi.org/10.1109/OJEMB.2025.3633051","url":null,"abstract":"<italic>Goal</i>: We introduce a continuous, multimodal pain classification technique that utilizes camera-based data conducted in clinical settings. <italic>Methods</i>: We integrate facial Action Units (AUs) obtained from samples with sequential vital parameters extracted from video data, and systematically validate the practicality of measuring heart rate variability (HRV) from video-derived photoplethysmographic signals against traditional sensor-based electrocardiogram measurements. Video-based AUs and HRV metrics acquired from ultra-short-term processing are combined into an automated, contactless, multimodal algorithm for binary pain classification. Utilizing logistic regression alongside leave-one-out cross-validation, this approach is developed and validated using the BioVid Heat Pain Database and subsequently tested with our surgical Individual Patient Data. <italic>Results</i>: We achieve an F1-score of 53% on the BioVid Heat Pain Database and 48% on our Individual Patient Data with ultra-short-term processing. <italic>Conclusion</i>: Our approach provides a robust foundation for future multimodal pain classification utilizing vital signs and mimic parameters from 5.5 s camera recordings.","PeriodicalId":33825,"journal":{"name":"IEEE Open Journal of Engineering in Medicine and Biology","volume":"7 ","pages":"1-6"},"PeriodicalIF":2.9,"publicationDate":"2025-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11249745","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145674785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Assessing Alveolar Bone Volume Fraction in Dental Implantology Using 1.5 Tesla Magnetic Resonance Imaging: An Ex Vivo Cross-Sectional Study
IF 2.9 · Q3 (Engineering, Biomedical) · Pub Date: 2025-11-10 · DOI: 10.1109/OJEMB.2025.3630901
Jingting Yao;Shidong Xu;Isabela G. G. Choi;Otavio Henrique Pinhata-Baptista;Jerome L. Ackerman
Objective: Oral implant procedures necessitate assessment of alveolar bone, a vital tooth-supporting structure. While micro-computed tomography (micro-CT) is the gold standard for bone volume fraction assessment, owing to its high spatial resolution and bone/soft tissue contrast, its substantial radiation exposure limits its use to specimens or small animals. This study evaluates the accuracy of 1.5T magnetic resonance imaging (MRI) in determining bone volume fraction, a surrogate of bone density, using micro-CT as the reference. Methods: Twenty-one alveolar bone biopsy specimens, which had undergone cone beam CT, micro-CT, and 14T MRI in a previous study, were subjected to 1.5T MRI. Results: Bone volume fraction measured by 1.5T MRI and by micro-CT showed a statistically significant correlation (r = 0.70, p < 0.0001). Consistency of the results was investigated through repeated scans and repeated analyses. Conclusion: 1.5T MRI may be an effective, radiation-free tool for alveolar bone volume fraction assessment.
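The reported association is a Pearson correlation between MRI- and micro-CT-derived bone volume fractions. The sketch below shows that correlation test on synthetic values for 21 specimens; the numbers are illustrative, not study data.

```python
# Hedged sketch of a Pearson correlation between two bone-volume-fraction
# measurements; the specimen values are synthetic placeholders.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
bvf_microct = rng.uniform(0.2, 0.6, size=21)            # 21 specimens (assumed values)
bvf_mri = bvf_microct + rng.normal(scale=0.08, size=21)  # noisy MRI-derived estimate

r, p = pearsonr(bvf_mri, bvf_microct)
print(f"r = {r:.2f}, p = {p:.4g}")
```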
{"title":"Assessing Alveolar Bone Volume Fraction in Dental Implantology Using 1.5 Tesla Magnetic Resonance Imaging: An Ex Vivo Cross-Sectional Study","authors":"Jingting Yao;Shidong Xu;Isabela G. G. Choi;Otavio Henrique Pinhata-Baptista;Jerome L. Ackerman","doi":"10.1109/OJEMB.2025.3630901","DOIUrl":"https://doi.org/10.1109/OJEMB.2025.3630901","url":null,"abstract":"<bold>Objective:</b> Oral implant procedures necessitate assessment of alveolar bone, a vital tooth-supporting structure. While micro-computed tomography (micro-CT) is the gold standard for bone volume fraction assessment for its high spatial resolution and bone/soft tissue contrast, its substantial radiation exposure limits its use to specimens or small animals. This study evaluates the accuracy of 1.5T magnetic resonance imaging (MRI) in determining bone volume fraction, a surrogate of bone density, using micro-CT as the reference. <bold>Methods:</b> Twenty-one alveolar bone biopsy specimens, which had undergone cone beam CT, micro-CT, and 14T MRI in a previous study, were subjected to 1.5T MRI. <bold>Results:</b> The comparison between bone volume fraction measured by 1.5T MRI and micro-CT demonstrated a statistically significant correlation (r = 0.70, p < 0.0001). Consistency in results was investigated through repeated scans and repeated scanning and analyses. <bold>Conclusion:</b> 1.5T MRI may be an effective, radiation-free tool for alveolar bone volume fraction assessment.","PeriodicalId":33825,"journal":{"name":"IEEE Open Journal of Engineering in Medicine and Biology","volume":"7 ","pages":"7-13"},"PeriodicalIF":2.9,"publicationDate":"2025-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11236089","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145729450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Development of an Improved Stacked U-Net Model for Cuffless Blood Pressure Estimation Based on PPG Signals
IF 2.9 · Q3 (Engineering, Biomedical) · Pub Date: 2025-10-23 · DOI: 10.1109/OJEMB.2025.3624566
Jenn-Kaie Lain;Chung-An Wang;Jun-Hao Xu;Chen-Wei Lee
Goal: This study presents an enhanced stacked U-Net deep learning model for cuffless blood pressure estimation using only photoplethysmogram signals, aiming to improve the accuracy of non-invasive measurements. Methods: To address the challenges of systolic blood pressure estimation, the model incorporates velocity plethysmogram input and employs additive spatial and channel attention mechanisms. These enhancements improve feature extraction and mitigate decoder mismatches in the U-Net architecture. Results: The model satisfies the Grade A criteria established by the British Hypertension Society and meets the accuracy standards of the Association for the Advancement of Medical Instrumentation, achieving mean absolute errors of 3.921 mmHg for systolic and 2.441 mmHg for diastolic blood pressure. It outperforms PPG-only spectro-temporal methods and achieves comparable performance to the joint photoplethysmogram and electrocardiogram one-dimensional squeeze-and-excitation network with long short-term memory architecture. Conclusions: The proposed model shows strong potential as a practical, low-cost, and non-invasive solution for continuous, cuffless blood pressure monitoring.
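The accuracy claims above combine a mean-absolute-error calculation with the cumulative percentages of errors within 5, 10, and 15 mmHg used by the BHS grading scheme. The sketch below computes both on synthetic systolic pressures; the data and error spread are illustrative assumptions.

```python
# Hedged sketch: MAE plus the cumulative-error percentages (within 5/10/15
# mmHg) used by BHS-style grading; BP arrays are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(7)
sbp_ref = rng.uniform(100, 160, size=500)              # reference SBP (mmHg)
sbp_est = sbp_ref + rng.normal(scale=5.0, size=500)    # estimated SBP (mmHg)

err = np.abs(sbp_est - sbp_ref)
mae = err.mean()
within = {thr: 100.0 * np.mean(err <= thr) for thr in (5, 10, 15)}
print(f"MAE {mae:.3f} mmHg; within 5/10/15 mmHg: "
      f"{within[5]:.0f}% / {within[10]:.0f}% / {within[15]:.0f}%")
```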
{"title":"Development of an Improved Stacked U-Net Model for Cuffless Blood Pressure Estimation Based on PPG Signals","authors":"Jenn-Kaie Lain;Chung-An Wang;Jun-Hao Xu;Chen-Wei Lee","doi":"10.1109/OJEMB.2025.3624566","DOIUrl":"https://doi.org/10.1109/OJEMB.2025.3624566","url":null,"abstract":"<italic>Goal:</i> This study presents an enhanced stacked U-Net deep learning model for cuffless blood pressure estimation using only photoplethysmogram signals, aiming to improve the accuracy of non-invasive measurements. <italic>Methods:</i> To address the challenges of systolic blood pressure estimation, the model incorporates velocity plethysmogram input and employs additive spatial and channel attention mechanisms. These enhancements improve feature extraction and mitigate decoder mismatches in the U-Net architecture. <italic>Results:</i> The model satisfies the Grade A criteria established by the British Hypertension Society and meets the accuracy standards of the Association for the Advancement of Medical Instrumentation, achieving mean absolute errors of 3.921 mmHg for systolic and 2.441 mmHg for diastolic blood pressure. It outperforms PPG-only spectro-temporal methods and achieves comparable performance to the joint photoplethysmogram and electrocardiogram one-dimensional squeeze-and-excitation network with long short-term memory architecture. <italic>Conclusions:</i> The proposed model shows strong potential as a practical, low-cost, and non-invasive solution for continuous, cuffless blood pressure monitoring.","PeriodicalId":33825,"journal":{"name":"IEEE Open Journal of Engineering in Medicine and Biology","volume":"6 ","pages":"584-590"},"PeriodicalIF":2.9,"publicationDate":"2025-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11215636","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145455918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Hierarchical Cross-Consistency Network Based Unsupervised Domain Adaptation for Pathology Whole Slide Image Segmentation
IF 2.9 · Q3 (Engineering, Biomedical) · Pub Date: 2025-10-23 · DOI: 10.1109/OJEMB.2025.3624582
Chien-Yu Chiou;Wei-Li Chen;Chun-Rong Huang;Yang C. Fann;Lawrence L. Latour;Pau-Choo Chung
Goal: Pathology images collected from different hospitals often show large appearance variability caused by different scanners, patients, or hospital protocols. Deep learning-based pathology segmentation models are highly dependent on the distribution of the training data. Therefore, the models often suffer from the domain shift problem when applied to new target domains from different hospitals. Methods: To address this issue, we propose a hierarchical cross-consistency (HCC) network that hierarchically adapts models across pathology images of various domains with three consistency-based modules: the consistency module, the pair module, and the mixture module. The consistency module enhances the prediction consistency of each target image under various perturbations. The pair module improves consistency among different target images. Finally, the mixture module enhances consistency across different domains. Results: Experimental results on pathology image datasets scanned using three different scanners show the superiority of the proposed HCC network compared to state-of-the-art unsupervised domain adaptation methods. Conclusions: The proposed method can successfully adapt trained pathology image segmentation models to new target domains, which is useful when introducing the models to different hospitals.
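The consistency module described above enforces agreement between predictions of a target image under different perturbations. The sketch below shows a generic prediction-consistency loss of that kind in PyTorch; the toy model, perturbations, and loss choice are assumptions for illustration, not the authors' HCC implementation.

```python
# Hedged sketch of a generic prediction-consistency term: MSE between the
# softmax outputs of a weakly and a strongly perturbed view of the same
# input. Model and perturbations are placeholders, not the HCC modules.
import torch
import torch.nn.functional as F

def consistency_loss(model, x_weak, x_strong):
    with torch.no_grad():
        p_weak = F.softmax(model(x_weak), dim=1)      # pseudo-target, no gradient
    p_strong = F.softmax(model(x_strong), dim=1)
    return F.mse_loss(p_strong, p_weak)

# Toy usage with a tiny segmentation-style head on random "patches".
model = torch.nn.Conv2d(3, 2, kernel_size=1)
x = torch.rand(4, 3, 64, 64)
loss = consistency_loss(model, x, x + 0.1 * torch.randn_like(x))
loss.backward()
```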
{"title":"Hierarchical Cross-Consistency Network Based Unsupervised Domain Adaptation for Pathology Whole Slide Image Segmentation","authors":"Chien-Yu Chiou;Wei-Li Chen;Chun-Rong Huang;Yang C. Fann;Lawrence L. Latour;Pau-Choo Chung","doi":"10.1109/OJEMB.2025.3624582","DOIUrl":"https://doi.org/10.1109/OJEMB.2025.3624582","url":null,"abstract":"<italic>Goal:</i> Pathology images collected from different hospitals often have large appearance variability causedby different scanners, patients, or hospital protocols. Deep learning-based pathology segmentation models are highly dependent on the distribution of training data. Therefore, the models often suffer from the domain shift problem when applied to new target domains of different hospitals. <italic>Methods:</i> To address this issue, we propose a hierarchical cross-consistency (HCC) network to hierarchically adapt models across pathology images of various domains with three consistency-based modules, the consistency module, the pair module, and the mixture module. The consistency module enhances the prediction consistency of each target image under various perturbations. The pair module improves consistency among different target images. Finally, the mixture module enhances the consistency across different domains. <italic>Results:</i> The experimental results on pathology image datasets scanned using three different scanners show the superiority of the proposed HCC network compared to state-of-the-art unsupervised domain adaptation methods. <italic>Conclusions:</i> The proposed method can successfully adapt trained pathology image segmentation models to new target domains, which is useful when introducing the models to different hospitals.","PeriodicalId":33825,"journal":{"name":"IEEE Open Journal of Engineering in Medicine and Biology","volume":"6 ","pages":"598-604"},"PeriodicalIF":2.9,"publicationDate":"2025-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11215652","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145560627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Automatic Detection of Gait Perturbations With Everyday Wearable Technology
IF 2.9 · Q3 (Engineering, Biomedical) · Pub Date: 2025-10-23 · DOI: 10.1109/OJEMB.2025.3624591
L. Feld;S. Hellmers;L. Schell-Majoor;J. Koschate-Storm;T. Zieschang;A. Hein;B. Kollmeier
Objective: Older adults face a heightened fall risk, which can severely impact their health. Individual responses to unexpected gait perturbations (e.g., slips) are potential predictors of this risk. This study examines automatic detection of treadmill-generated gait perturbations using acceleration and angular velocity from everyday wearables. Detection is achieved using a deep convolutional long short-term memory (DeepConvLSTM) algorithm. Results: An F1 score of at least 0.68 and a recall of 0.86 were obtained for all data, i.e., data from hearing aids, smartphones at various positions, and professional sensors at the lumbar spine and sternum. Performance did not change significantly when combining data from different sensor positions or when using only acceleration data. Conclusion: The results suggest that hearing aids and smartphones can monitor gait perturbations with performance similar to that of professional equipment, highlighting the potential of everyday wearables for continuous fall risk monitoring.
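A DeepConvLSTM combines 1-D convolutions over the inertial channels with a recurrent LSTM stage. The sketch below is a minimal PyTorch version of that architecture; the layer sizes, window length, and channel count are illustrative assumptions, not the study's configuration.

```python
# Hedged sketch of a DeepConvLSTM-style classifier for IMU windows:
# 1-D convolutions over sensor channels followed by an LSTM and a linear
# head. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class DeepConvLSTM(nn.Module):
    def __init__(self, n_channels=6, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=128, num_layers=2,
                            batch_first=True)
        self.head = nn.Linear(128, n_classes)

    def forward(self, x):                        # x: (batch, channels, time)
        feats = self.conv(x).permute(0, 2, 1)    # -> (batch, time, 64)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])             # logits from last time step

logits = DeepConvLSTM()(torch.rand(8, 6, 200))   # 8 windows of 200 samples
print(logits.shape)                              # torch.Size([8, 2])
```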
{"title":"Automatic Detection of Gait Perturbations With Everyday Wearable Technology","authors":"L. Feld;S. Hellmers;L. Schell-Majoor;J. Koschate-Storm;T. Zieschang;A. Hein;B. Kollmeier","doi":"10.1109/OJEMB.2025.3624591","DOIUrl":"10.1109/OJEMB.2025.3624591","url":null,"abstract":"<italic>Objective:</i> Older adults face a heightened fall risk, which can severely impact their health. Individual responses to unexpected gait perturbations (e.g., slips) are potential predictors of this risk. This study examines automatic detection of treadmill-generated gait perturbations using acceleration and angular velocity from everyday wearables. Detection is achieved using a deep convolutional long short-term memory (DeepConvLSTM) algorithm. <italic>Results:</i> An F1 score of at least 0.68 and recall of 0.86 was retrieved for all data, i.e., data from hearing aids, smartphones at various positions and professional sensors at lumbar and sternum. Performance did not significantly change when combining data from different sensor positions or using only acceleration data. <italic>Conclusion:</i> Results suggest that hearing aids and smartphones can monitor gait perturbations with similar performance as professional equipment, highlighting the potential of everyday wearables for continuous fall risk monitoring.","PeriodicalId":33825,"journal":{"name":"IEEE Open Journal of Engineering in Medicine and Biology","volume":"6 ","pages":"570-575"},"PeriodicalIF":2.9,"publicationDate":"2025-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12599889/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145497023","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0