
IEEE Journal of Translational Engineering in Health and Medicine (JTEHM): Latest Publications

Automated Evaluation of Urodynamic Examinations Through Local Linear Models: Validation on Spinal Cord Injury Individuals
IF 3.7 | Tier 3 (Medicine) | Q2 ENGINEERING, BIOMEDICAL | Pub Date: 2025-02-21 | DOI: 10.1109/JTEHM.2025.3544486
Wensi Zhang;Jürgen Pannek;Jens Wöllner;Robert Riener;Diego Paez-Granados
Objective: To investigate consistent methods and metrics for classifying Detrusor Overactivity (DO) events and to develop an automated, robust method for calculating clinical measurements from cystometry data in persons with spinal cord injury (SCI). Methods and procedures: A two-stage method was proposed to detect DO events. In the first stage, DO peaks were detected using local linear models combined with thresholding criteria derived from clinical definitions and known artifacts. In the second stage, a segmentation method was proposed to detect the start and end time points of each DO event, marking the DO activity periods. As a result, complete clinical measurements, including bladder compliance, can be estimated automatically. The method was developed and tested on 77 anonymized urodynamic samples from SCI individuals (40 DO-positive, 37 DO-negative) with 158 annotated DO events. Results: On test data, the proposed method achieved an accuracy of 100% for the patient-level diagnosis of DO. Individual DO event detection achieved an average precision of 0.94 and recall of 0.72. Detrusor activity period identification showed a precision of 0.86 and a recall of 0.88. For automated bladder compliance estimation, the point-value-based method yielded a lower median absolute error (MAE) than the proposed line-fitting-based method, with MAEs of 5.20 and 7.14 ml/cmH2O, respectively. Finally, for classifying bladder function into normal, low, and severely low compliance, the proposed method had an accuracy of 88%. Conclusion: Our proposed local model fitting with thresholding based on clinical knowledge achieved accurate automated results for cystometry data, which will enable objective assessment of routinely performed examinations. Clinical and Translational Impact Statement: This work proposes a fully automated detrusor overactivity diagnosis and feature extraction method. It empowers medical teams to consistently assess urodynamic studies while aiding disease characterization and enhancing clinical decision-making for SCI patients. Furthermore, it provides a mathematically defined method for extending the pipeline to other populations and standardizing clinical assessments. Category: Clinical Engineering, Medical Devices and Systems.
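For context, the following is a minimal sketch (not the authors' implementation) of the two building blocks the abstract names: a sliding-window local linear fit over the detrusor pressure trace with slope and amplitude thresholding to flag DO peak candidates, and the point-value compliance formula C = dV/dPdet. The sampling rate, window length, and threshold values are illustrative assumptions, not values from the paper.

```python
import numpy as np

def local_linear_slopes(pdet, fs=10.0, win_s=5.0):
    """Fit a line to each sliding window of detrusor pressure; return slopes in cmH2O/s."""
    win = int(win_s * fs)
    t = np.arange(win) / fs
    slopes = np.full(len(pdet), np.nan)
    for i in range(len(pdet) - win):
        slopes[i + win // 2] = np.polyfit(t, pdet[i:i + win], 1)[0]  # slope of the local fit
    return slopes

def do_peak_candidates(pdet, slopes, slope_thr=0.5, amp_thr=5.0):
    """Flag samples whose local slope and rise above baseline both exceed thresholds."""
    baseline = np.nanmedian(pdet)
    return np.where((slopes > slope_thr) & (pdet - baseline > amp_thr))[0]

def point_value_compliance(delta_volume_ml, pdet_start, pdet_end):
    """Point-value bladder compliance C = dV / dPdet in ml/cmH2O."""
    return delta_volume_ml / max(pdet_end - pdet_start, 1e-6)
```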
Citations: 0
Temporal Relation Modeling and Multimodal Adversarial Alignment Network for Pilot Workload Evaluation
IF 3.7 | Tier 3 (Medicine) | Q2 ENGINEERING, BIOMEDICAL | Pub Date: 2025-02-14 | DOI: 10.1109/JTEHM.2025.3542408
Xinhui Li;Ao Li;Wenyu Fu;Xun Song;Fan Li;Qiang Ma;Yong Peng;Zhao LV
Pilots face complex working environments during flight missions, which can easily lead to excessive workload and affect flight safety. Physiological signals are commonly used to evaluate a pilot’s workload because they are objective and can directly reflect physiological and mental states. However, existing methods have shortcomings in temporal modeling, making it challenging to fully capture the dynamic characteristics of physiological signals over time. Moreover, fusing features of data from different modalities is also difficult. To address these problems, we proposed a temporal relation modeling and multimodal adversarial alignment network (TRM-MAAN) for pilot workload evaluation. Specifically, a Transformer-based temporal relationship modeling module was used to learn complex temporal relationships for better feature extraction. In addition, an adversarial alignment-based multi-modal fusion module was applied to capture and integrate multi-modal information, reducing distribution shifts between different modalities. The performance of the proposed TRM-MAAN method was evaluated via experiments classifying three workload states using electroencephalogram (EEG) and electromyography (EMG) recordings of eight healthy pilots. Experimental results showed that the classification accuracy and F1 score of the proposed method were significantly better than those of the baseline model across different subjects, with an average recognition accuracy of $91.90 \pm 1.72\%$ and an F1 score of $91.86 \pm 1.75\%$. This work provides essential technical support for improving the accuracy and robustness of pilot workload evaluation and introduces a promising way to enhance flight safety, offering broad application prospects. Clinical and Translational Impact Statement: The proposed scheme provides a promising solution for workload evaluation based on electrophysiological signals, with potential applications in aiding the clinical monitoring of fatigue, mental status, cognitive psychology, and other disorders.
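A minimal sketch, under assumptions, of the two ideas the abstract describes: one Transformer encoder per modality for temporal relationship modeling, and a gradient-reversal modality discriminator as a simple form of adversarial alignment. This is not the published TRM-MAAN architecture; layer sizes, the pooling choice, and the discriminator design are placeholders.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips (and scales) gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -ctx.lamb * grad, None

class ModalityEncoder(nn.Module):
    def __init__(self, in_dim, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.proj = nn.Linear(in_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.enc = nn.TransformerEncoder(layer, n_layers)
    def forward(self, x):                          # x: (batch, time, in_dim)
        return self.enc(self.proj(x)).mean(dim=1)  # temporal pooling -> (batch, d_model)

class WorkloadNet(nn.Module):
    def __init__(self, eeg_dim, emg_dim, n_classes=3, d_model=64):
        super().__init__()
        self.eeg = ModalityEncoder(eeg_dim, d_model)
        self.emg = ModalityEncoder(emg_dim, d_model)
        self.cls = nn.Linear(2 * d_model, n_classes)   # workload classifier (three states)
        self.disc = nn.Linear(d_model, 2)              # modality discriminator (EEG vs. EMG)
    def forward(self, eeg, emg, lamb=1.0):
        he, hm = self.eeg(eeg), self.emg(emg)
        logits = self.cls(torch.cat([he, hm], dim=-1))
        dom = self.disc(GradReverse.apply(torch.cat([he, hm], dim=0), lamb))
        return logits, dom  # train with CE on logits plus CE on dom (labels 0=EEG, 1=EMG)
```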
Citations: 0
Quantification of Motor Learning in Hand Adjustability Movements: An Evaluation Variable for Discriminant Cognitive Decline
IF 3.7 | Tier 3 (Medicine) | Q2 ENGINEERING, BIOMEDICAL | Pub Date: 2025-02-10 | DOI: 10.1109/JTEHM.2025.3540203
Kazuya Toshima;Yu Chokki;Toshiaki Wasaka;Tsukasa Tamaru;Yoshifumi Morita
Objective: Mild cognitive impairment (MCI) is characterized by early symptoms of attentional decline and may be distinguished through motor learning results. A relationship has been reported between dexterous hand movements and cognitive function in older adults. Therefore, this study focuses on motor learning involving dexterous hand movements. As motor learning engages two distinct types of attention, external and internal, we aimed to develop an evaluation method that separates these attentional functions within motor learning. The objective of this study was to develop and verify the effectiveness of this evaluation method. The effectiveness was assessed by comparing two motor learning variables between normal cognitive (NC) and MCI groups. Method: To evaluate motor learning through dexterous hand movements, we utilized the iWakka device. Two types of visual tracking tasks, repeat and random, were designed to evaluate motor learning from different aspects. The tracking errors in both tasks were quantitatively measured, and the initial and final improvement rates during motor learning were defined as the evaluation variables. The study included 28 MCI participants and 40 NC participants, and the effectiveness of the proposed method was verified by comparing results between the groups. Results: The repeat task revealed a significantly lower learning rate in MCI participants (p < 0.01). In contrast, no significant difference was observed between MCI and NC participants in the random task (p = 0.67). Conclusion: The evaluation method proposed in this study demonstrated the possibility of obtaining evaluation variables that indicate the characteristics of MCI. Clinical Impact: The methods proposed in this work are clinically relevant because the proposed evaluation system can produce evaluation variables for discriminating cognitive decline in MCI. That is, the proposed approach can also be used to discriminate cognitive decline in MCI.
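The evaluation variables are improvement rates computed from tracking errors across trials. A minimal sketch of that computation follows; the exact trial groupings used for the initial and final rates are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def improvement_rates(trial_errors):
    """trial_errors: per-trial mean tracking error, in the order the trials were performed."""
    e = np.asarray(trial_errors, dtype=float)
    initial = (e[0] - e[1]) / e[0] * 100.0   # early-learning improvement, in percent
    final = (e[0] - e[-1]) / e[0] * 100.0    # overall improvement after practice, in percent
    return initial, final

# Example: tracking errors shrinking over five repeat-task trials.
print(improvement_rates([2.4, 2.0, 1.7, 1.5, 1.4]))   # approximately (16.7, 41.7)
```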
Citations: 0
Cross-Modal Augmented Transformer for Automated Medical Report Generation
IF 3.7 | Tier 3 (Medicine) | Q2 ENGINEERING, BIOMEDICAL | Pub Date: 2025-01-29 | DOI: 10.1109/JTEHM.2025.3536441
Yuhao Tang;Ye Yuan;Fei Tao;Minghao Tang
In clinical practice, interpreting medical images and composing diagnostic reports typically involve significant manual workload. Therefore, an automated report generation framework that mimics a doctor’s diagnosis better meets the requirements of medical scenarios. Prior investigations often overlook this critical aspect, primarily relying on traditional image captioning frameworks initially designed for general-domain images and sentences. Despite achieving some advancements, these methodologies encounter two primary challenges. First, strong noise in blurred medical images often hinders the model from capturing the lesion region. Second, during report writing, doctors typically rely on terminology for diagnosis, a crucial aspect that has been neglected in prior frameworks. In this paper, we present a novel approach called Cross-modal Augmented Transformer (CAT) for medical report generation. Unlike previous methods that rely on coarse-grained features without human intervention, our method introduces a “locate then generate” pattern, thereby improving the interpretability of the generated reports. During the locate stage, CAT captures crucial representations by pre-aligning significant patches and their corresponding medical terminologies. This pre-alignment helps reduce visual noise by discarding low-ranking content, ensuring that only relevant information is considered in the report generation process. During the generation phase, CAT utilizes a multi-modality encoder to reinforce the correlation between generated keywords, retrieved terminologies, and regions. Furthermore, CAT employs a dual-stream decoder that dynamically determines whether the predicted word should be influenced by the retrieved terminology or the preceding sentence. Experimental results demonstrate the effectiveness of the proposed method on two datasets. Clinical impact: This work aims to design an automated framework for explaining medical images to evaluate the health status of individuals, thereby facilitating their broader application in clinical settings. Clinical and Translational Impact Statement: In our preclinical research, we develop an automated system for generating diagnostic reports. This system mimics manual diagnostic methods by combining fine-grained semantic alignment with dual-stream decoders.
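As an illustration of the "locate" stage described above, the sketch below scores image patch embeddings against medical terminology embeddings and keeps only the top-ranked patches. It is an assumption about how such pre-alignment can be realized, not the authors' implementation, and the keep ratio is a placeholder.

```python
import torch
import torch.nn.functional as F

def locate_patches(patch_emb, term_emb, keep_ratio=0.3):
    """patch_emb: (n_patches, d) visual features; term_emb: (n_terms, d) terminology features."""
    sim = F.normalize(patch_emb, dim=-1) @ F.normalize(term_emb, dim=-1).T  # cosine similarities
    score = sim.max(dim=1).values             # best-matching terminology for each patch
    k = max(1, int(keep_ratio * patch_emb.shape[0]))
    keep = score.topk(k).indices              # patches passed on to the report decoder
    return keep, score
```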
Citations: 0
Multi-Branch CNN-LSTM Fusion Network-Driven System With BERT Semantic Evaluator for Radiology Reporting in Emergency Head CTs
IF 3.7 | Tier 3 (Medicine) | Q2 ENGINEERING, BIOMEDICAL | Pub Date: 2025-01-28 | DOI: 10.1109/JTEHM.2025.3535676
Selene Tomassini;Damiano Duranti;Abdallah Zeggada;Carlo Cosimo Quattrocchi;Farid Melgani;Paolo Giorgini
The high volume of emergency room patients often necessitates head CT examinations to rule out ischemic, hemorrhagic, or other organic pathologies. A system that enhances the diagnostic efficacy of head CT imaging in emergency settings through structured reporting would significantly improve clinical decision making. Currently, no AI solutions address this need. Thus, our research aims to develop an automatic radiology reporting system by directly analyzing brain anomalies in head CT data. We propose a multi-branch CNN-LSTM fusion network-driven system for enhanced radiology reporting in emergency settings. We preprocessed head CT scans by resizing all slices, selecting those with significant variability, and applying PCA to retain 95% of the original data variance, ultimately keeping the five most representative slices for each scan. We linked the reports to their respective slice IDs, divided them into individual captions, and preprocessed each. We performed an 80-20 split of the dataset ten times, with 15% of the training set used for validation. Our model utilizes a pretrained VGG16, processing groups of five slices simultaneously, and features multiple end-to-end LSTM branches, each specialized in predicting one caption; these are subsequently combined to form the ordered reports after a BERT-based semantic evaluation. Our system demonstrates effectiveness and stability, with the postprocessing stage refining the syntax of the generated descriptions. However, there remains an opportunity to strengthen the evaluation framework so that it more accurately assesses the clinical relevance of the automatically written reports. Part of future work will include transitioning to 3D and developing an improved version based on vision-language models.
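The slice-selection step lends itself to a short sketch. The version below is one illustrative reading of that preprocessing, not the authors' code: flatten each resized slice, fit a PCA basis retaining 95% of the variance, and keep the five slices that carry the most energy in that basis.

```python
import numpy as np
from sklearn.decomposition import PCA

def select_slices(volume, n_keep=5, var_kept=0.95):
    """volume: (n_slices, H, W) head-CT scan, already resized to a common shape."""
    flat = volume.reshape(volume.shape[0], -1).astype(float)
    pca = PCA(n_components=var_kept)       # keep enough components for 95% of the variance
    proj = pca.fit_transform(flat)
    energy = (proj ** 2).sum(axis=1)       # variance each slice contributes in the PCA basis
    keep = np.argsort(energy)[::-1][:n_keep]
    return np.sort(keep)                   # indices of the retained slices, in scan order
```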
Citations: 0
Intelligent Neonatal Blood Perfusion Assessment System Based on Near-Infrared Spectroscopy
IF 3.7 | Tier 3 (Medicine) | Q2 ENGINEERING, BIOMEDICAL | Pub Date: 2025-01-22 | DOI: 10.1109/JTEHM.2025.3532801
Hsiu-Lin Chen;Bor-Shing Lin;Chieh-Miao Chang;Hao-Wei Chung;Shu-Ting Yang;Bor-Shyh Lin
High-risk infants in the neonatal intensive care unit often encounter problems with hemodynamic instability, and poor blood circulation may cause shock or other sequelae. However, shock is not easily noticed in its initial stage, and most clinical judgments depend subjectively on experienced physicians. Therefore, effectively evaluating the neonatal blood circulation state is important for timely treatment. Although some instruments, such as laser Doppler flow meters, can estimate blood flow information, there is still a lack of monitoring systems that evaluate neonatal blood circulation directly. Based on the technique of near-infrared spectroscopy, an intelligent neonatal blood perfusion assessment system was proposed in this study to monitor changes in hemoglobin concentration and tissue oxygen saturation simultaneously and further estimate neonatal blood perfusion. Several indexes were defined from the changes in hemoglobin parameters under applied and released pressure to obtain neonatal perfusion information. Moreover, a neural network-based classifier was used to effectively classify groups with different blood perfusion states. The experimental results show that the differences between groups with different blood perfusion states were clearly reflected in several of the defined indexes and could be effectively recognized using the neural network. Clinical and Translational Impact Statement: An intelligent neonatal blood perfusion assessment system was proposed to monitor the changes of hemoglobin concentration and tissue oxygen saturation simultaneously and further estimate the neonatal blood perfusion (Category: Preclinical Research)
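A minimal sketch of the kind of pressure-response indexes the abstract alludes to, under stated assumptions: the index definitions below (drop under occlusion, recovery slope, and overshoot after release) are illustrative examples, not the indexes defined in the paper.

```python
import numpy as np

def perfusion_indexes(hbo, fs, press_on, press_off):
    """hbo: oxy-hemoglobin change trace; press_on/press_off: sample indices of pressure on/off."""
    baseline = hbo[:press_on].mean()
    drop = baseline - hbo[press_on:press_off].min()          # depth of the drop under pressure
    recovery = hbo[press_off:press_off + int(5 * fs)]        # first 5 s after pressure release
    recovery_slope = np.polyfit(np.arange(len(recovery)) / fs, recovery, 1)[0]
    overshoot = hbo[press_off:].max() - baseline             # hyperaemic overshoot after release
    return {"drop": drop, "recovery_slope": recovery_slope, "overshoot": overshoot}
```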
Citations: 0
Design and Development of an Integrated Virtual Reality (VR)-Based Training System for Difficult Airway Management
IF 3.7 | Tier 3 (Medicine) | Q2 ENGINEERING, BIOMEDICAL | Pub Date: 2025-01-14 | DOI: 10.1109/JTEHM.2025.3529748
Saurabh Jain;Bijoy Dripta Barua Chowdhury;Jarrod M. Mosier;Vignesh Subbian;Kate Hughes;Young-Jun Son
For over 40 years, airway management simulation has been a cornerstone of medical training, aiming to reduce procedural risks for critically ill patients. However, existing simulation technologies often lack the versatility and realism needed to replicate the cognitive and physical challenges of complex airway management scenarios. We developed a novel Virtual Reality (VR)-based simulation system designed to enhance immersive airway management training and research. This system integrates physical and virtual environments with an external sensory framework to capture high-fidelity data on user performance. Advanced calibration techniques ensure precise positional tracking and realistic physics-based interactions, providing a cohesive mixed-reality experience. Validation studies conducted in a dedicated medical training center demonstrated the system’s effectiveness in replicating real-world conditions. Positional calibration accuracy was achieved within 0.1 cm, with parameter calibrations showing no significant discrepancies. Pre- and post-simulation surveys indicated positive feedback on training aspects, perceived usefulness, and ease of use. These results suggest that the system offers a significant improvement in procedural and cognitive training for high-stakes medical environments.
Citations: 0
Fusion Model Using Resting Neurophysiological Data to Help Mass Screening of Methamphetamine Use Disorder
IF 3.7 | Tier 3 (Medicine) | Q2 ENGINEERING, BIOMEDICAL | Pub Date: 2024-12-25 | DOI: 10.1109/JTEHM.2024.3522356
Chun-Chuan Chen;Meng-Chang Tsai;Eric Hsiao-Kuang Wu;Shao-Rong Sheng;Jia-Jeng Lee;Yung-En Lu;Shih-Ching Yeh
Methamphetamine use disorder (MUD) is a substance use disorder. Because MUD has become more prevalent due to the COVID-19 pandemic, alternative ways to improve the efficiency of mass screening for MUD are important. Previous studies used electroencephalogram (EEG), heart rate variability (HRV), and galvanic skin response (GSR) aberrations during the virtual reality (VR) induction of drug craving to accurately separate patients with MUD from healthy controls. However, whether these abnormalities are present without induction of drug-cue reactivity, enabling separation between patients and healthy subjects, remains unclear. Here, we propose a clinically comparable intelligent system using the fusion of 5-channel EEG, HRV, and GSR data during the resting state to aid in detecting MUD. Forty-six patients with MUD and 26 healthy controls were recruited, and machine learning methods were employed to systematically compare the classification results of different fusion models. The analytic results revealed that the fusion of HRV and GSR features led to the highest separation accuracy of 79%. The use of EEG, HRV, and GSR features provides more robust information, leading to relatively similar and enhanced accuracy across different classifiers. In conclusion, we demonstrated that a clinically applicable intelligent system using resting-state EEG, ECG, and GSR features without the induction of drug-cue reactivity enhances the detection of MUD. This system is easy to implement in the clinical setting and can save substantial setup and experiment time while maintaining excellent accuracy to assist in mass screening of MUD.
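A minimal sketch of the fusion-model comparison described above, under assumptions: resting-state HRV and GSR feature vectors are concatenated and a few standard classifiers are compared with cross-validation. The feature extraction, classifier set, and hyperparameters are illustrative, not those used in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def compare_fusion_models(hrv_feats, gsr_feats, labels, cv=5):
    """hrv_feats, gsr_feats: (n_subjects, n_features) arrays; labels: 0 = control, 1 = MUD."""
    X = np.hstack([hrv_feats, gsr_feats])   # simple feature-level fusion of the two modalities
    models = {
        "svm": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
        "logreg": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
        "rf": RandomForestClassifier(n_estimators=200, random_state=0),
    }
    return {name: cross_val_score(m, X, labels, cv=cv).mean() for name, m in models.items()}
```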
Citations: 0
IEEE JOURNAL OF TRANSLATIONAL ENGINEERING IN HEALTH AND MEDICINE
IF 3.7 | Tier 3 (Medicine) | Q2 ENGINEERING, BIOMEDICAL | Pub Date: 2024-12-13 | DOI: 10.1109/JTEHM.2024.3516335
{"title":"IEEE JOURNAL OF TRANSLATIONAL ENGINEERING IN HEALTH AND MEDICINE","authors":"","doi":"10.1109/JTEHM.2024.3516335","DOIUrl":"https://doi.org/10.1109/JTEHM.2024.3516335","url":null,"abstract":"","PeriodicalId":54255,"journal":{"name":"IEEE Journal of Translational Engineering in Health and Medicine-Jtehm","volume":"12 ","pages":"C3-C3"},"PeriodicalIF":3.7,"publicationDate":"2024-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10799104","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142825791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
>IEEE Journal on Translational Engineering in Medicine and Biology publication information >IEEE 医学与生物学转化工程期刊》出版信息
IF 3.7 | Tier 3 (Medicine) | Q2 ENGINEERING, BIOMEDICAL | Pub Date: 2024-12-13 | DOI: 10.1109/JTEHM.2024.3513733
{"title":">IEEE Journal on Translational Engineering in Medicine and Biology publication information","authors":"","doi":"10.1109/JTEHM.2024.3513733","DOIUrl":"https://doi.org/10.1109/JTEHM.2024.3513733","url":null,"abstract":"","PeriodicalId":54255,"journal":{"name":"IEEE Journal of Translational Engineering in Health and Medicine-Jtehm","volume":"12 ","pages":"C2-C2"},"PeriodicalIF":3.7,"publicationDate":"2024-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10799063","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142821245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0