Latest Publications: IEEE Journal of Translational Engineering in Health and Medicine (JTEHM)
Quantification of Motor Learning in Hand Adjustability Movements: An Evaluation Variable for Discriminant Cognitive Decline
IF 3.7, CAS Tier 3 (Medicine), Q2 ENGINEERING, BIOMEDICAL. Pub Date: 2025-02-10. DOI: 10.1109/JTEHM.2025.3540203
Kazuya Toshima;Yu Chokki;Toshiaki Wasaka;Tsukasa Tamaru;Yoshifumi Morita
Objective: Mild cognitive impairment (MCI) is characterized by early symptoms of attentional decline and may be distinguished through motor learning results. A relationship has been reported between dexterous hand movements and cognitive function in older adults. Therefore, this study focuses on motor learning involving dexterous hand movements. As motor learning engages two distinct types of attention, external and internal, we aimed to develop an evaluation method that separates these attentional functions within motor learning. The objective of this study was to develop and verify the effectiveness of this evaluation method. The effectiveness was assessed by comparing two motor learning variables between the normal cognitive (NC) and MCI groups. Method: To evaluate motor learning through dexterous hand movements, we utilized the iWakka device. Two types of visual tracking tasks, repeat and random, were designed to evaluate motor learning from different aspects. The tracking errors in both tasks were quantitatively measured, and the initial and final improvement rates during motor learning were defined as the evaluation variables. The study included 28 MCI participants and 40 NC participants, and the effectiveness of the proposed method was verified by comparing results between the groups. Results: The repeat task revealed a significantly lower learning rate in MCI participants (p < 0.01). In contrast, no significant difference was observed between MCI and NC participants in the random task (p = 0.67). Conclusion: The evaluation method proposed in this study demonstrated the possibility of obtaining evaluation variables that indicate the characteristics of MCI. Clinical Impact: The methods proposed in this work are clinically relevant because the proposed evaluation system can produce evaluation variables for discriminating cognitive decline in MCI; that is, the proposed approach can also be used to discriminate cognitive decline in MCI.
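The abstract above defines initial and final improvement rates over the tracking-error series as the evaluation variables. A minimal sketch of how such rates could be computed, assuming (the abstract does not give the formula) that each rate is the relative error reduction from the first trial; `improvement_rates` and the sample error values are illustrative:

```python
import numpy as np

def improvement_rates(tracking_errors):
    """Compute initial and final improvement rates from per-trial tracking errors.

    Assumption: the rate is the relative error reduction from the first trial;
    the paper's exact definition is not given in the abstract.
    """
    e = np.asarray(tracking_errors, dtype=float)
    baseline = e[0]
    initial = (baseline - e[1]) / baseline   # improvement after the first learning step
    final = (baseline - e[-1]) / baseline    # improvement at the last trial
    return initial, final

# Example: tracking error shrinking over five trials of a repeat task
init_rate, final_rate = improvement_rates([2.0, 1.6, 1.3, 1.1, 1.0])
print(init_rate, final_rate)
```

A lower `final_rate` in one group would then correspond to the slower learning the study reports for MCI participants.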
Citations: 0
Cross-Modal Augmented Transformer for Automated Medical Report Generation
IF 3.7, CAS Tier 3 (Medicine), Q2 ENGINEERING, BIOMEDICAL. Pub Date: 2025-01-29. DOI: 10.1109/JTEHM.2025.3536441
Yuhao Tang;Ye Yuan;Fei Tao;Minghao Tang
In clinical practice, interpreting medical images and composing diagnostic reports typically involve significant manual workload. Therefore, an automated report generation framework that mimics a doctor's diagnosis better meets the requirements of medical scenarios. Prior investigations often overlook this critical aspect, primarily relying on traditional image captioning frameworks initially designed for general-domain images and sentences. Despite achieving some advancements, these methodologies encounter two primary challenges. First, the strong noise in blurred medical images often hinders the model from capturing the lesion region. Second, during report writing, doctors typically rely on terminology for diagnosis, a crucial aspect that has been neglected in prior frameworks. In this paper, we present a novel approach called Cross-modal Augmented Transformer (CAT) for medical report generation. Unlike previous methods that rely on coarse-grained features without human intervention, our method introduces a "locate then generate" pattern, thereby improving the interpretability of the generated reports. During the locate stage, CAT captures crucial representations by pre-aligning significant patches and their corresponding medical terminologies. This pre-alignment helps reduce visual noise by discarding low-ranking content, ensuring that only relevant information is considered in the report generation process. During the generation phase, CAT utilizes a multi-modality encoder to reinforce the correlation between generated keywords, retrieved terminologies, and regions. Furthermore, CAT employs a dual-stream decoder that dynamically determines whether the predicted word should be influenced by the retrieved terminology or the preceding sentence.
Experimental results demonstrate the effectiveness of the proposed method on two datasets. Clinical impact: This work aims to design an automated framework for explaining medical images to evaluate the health status of individuals, thereby facilitating their broader application in clinical settings. Clinical and Translational Impact Statement: In our preclinical research, we develop an automated system for generating diagnostic reports. This system mimics manual diagnostic methods by combining fine-grained semantic alignment with dual-stream decoders.
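The dual-stream decoder described above dynamically chooses between terminology context and preceding-sentence context. One common way to realize such a choice is a learned scalar gate blending two context vectors; the sketch below is a toy illustration of that idea, with `gated_fusion`, the weights, and the dimensions all invented rather than taken from the paper's architecture:

```python
import numpy as np

def gated_fusion(h_term, h_sent, w_gate, b_gate):
    """Blend a terminology context vector with a sentence context vector.

    A sigmoid gate (weights assumed, not the paper's) decides how much the
    next-word prediction leans on retrieved terminology vs. the prior sentence.
    """
    x = np.concatenate([h_term, h_sent])
    g = 1.0 / (1.0 + np.exp(-(w_gate @ x + b_gate)))  # scalar gate in (0, 1)
    return g * h_term + (1.0 - g) * h_sent

rng = np.random.default_rng(0)
d = 4                                      # toy context dimension
h_term, h_sent = rng.normal(size=d), rng.normal(size=d)
w, b = rng.normal(size=2 * d), 0.0
fused = gated_fusion(h_term, h_sent, w, b)
print(fused.shape)  # (4,)
```

In a real decoder the gate would be computed per time step so different words can favor different streams.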
Citations: 0
Multi-Branch CNN-LSTM Fusion Network-Driven System With BERT Semantic Evaluator for Radiology Reporting in Emergency Head CTs
IF 3.7, CAS Tier 3 (Medicine), Q2 ENGINEERING, BIOMEDICAL. Pub Date: 2025-01-28. DOI: 10.1109/JTEHM.2025.3535676
Selene Tomassini;Damiano Duranti;Abdallah Zeggada;Carlo Cosimo Quattrocchi;Farid Melgani;Paolo Giorgini
The high volume of emergency room patients often necessitates head CT examinations to rule out ischemic, hemorrhagic, or other organic pathologies. A system that enhances the diagnostic efficacy of head CT imaging in emergency settings through structured reporting would significantly improve clinical decision making. Currently, no AI solutions address this need. Thus, our research aims to develop an automatic radiology reporting system by directly analyzing brain anomalies in head CT data. We propose a multi-branch CNN-LSTM fusion network-driven system for enhanced radiology reporting in emergency settings. We preprocessed head CT scans by resizing all slices, selecting those with significant variability, and applying PCA to retain 95% of the original data variance, ultimately saving the five most representative slices for each scan. We linked the reports to their respective slice IDs, divided them into individual captions, and preprocessed each. We performed an 80-20 split of the dataset ten times, with 15% of the training set used for validation. Our model utilizes a pretrained VGG16, processing groups of five slices simultaneously, and features multiple end-to-end LSTM branches, each specialized in predicting one caption, subsequently combined to form the ordered reports after a BERT-based semantic evaluation. Our system demonstrates effectiveness and stability, with the postprocessing stage refining the syntax of the generated descriptions. However, there remains an opportunity to strengthen the evaluation framework to more accurately assess the clinical relevance of the automatically written reports. Part of future work will include transitioning to 3D and developing an improved version based on vision-language models.
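The slice-selection step above (PCA retaining 95% of the variance, then keeping five representative slices per scan) can be sketched as follows. The ranking criterion, projection norm in the reduced PCA space, is an assumption for illustration, since the abstract does not specify how representativeness is scored:

```python
import numpy as np

def select_representative_slices(volume, n_keep=5, var_target=0.95):
    """Pick representative axial slices of a CT volume via PCA.

    Sketch under assumptions: flatten each slice, run PCA (via SVD) keeping
    enough components for `var_target` of the variance, then rank slices by
    the norm of their projection in the reduced space.
    """
    n_slices = volume.shape[0]
    flat = volume.reshape(n_slices, -1).astype(float)
    flat = flat - flat.mean(axis=0)                  # center before PCA
    U, s, _ = np.linalg.svd(flat, full_matrices=False)
    ratio = np.cumsum(s**2) / np.sum(s**2)           # cumulative explained variance
    k = int(np.searchsorted(ratio, var_target)) + 1  # components for 95% variance
    comps = U[:, :k] * s[:k]                         # per-slice PCA projections
    scores = np.linalg.norm(comps, axis=1)
    keep = np.sort(np.argsort(scores)[::-1][:n_keep])  # top-n, in scan order
    return volume[keep], keep

vol = np.random.default_rng(1).normal(size=(40, 32, 32))  # toy 40-slice scan
subset, idx = select_representative_slices(vol)
print(subset.shape, idx)
```

The sorted `idx` preserves anatomical ordering, which matters when the kept slices are later fed to the captioning branches as a group.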
Citations: 0
Intelligent Neonatal Blood Perfusion Assessment System Based on Near-Infrared Spectroscopy
IF 3.7, CAS Tier 3 (Medicine), Q2 ENGINEERING, BIOMEDICAL. Pub Date: 2025-01-22. DOI: 10.1109/JTEHM.2025.3532801
Hsiu-Lin Chen;Bor-Shing Lin;Chieh-Miao Chang;Hao-Wei Chung;Shu-Ting Yang;Bor-Shyh Lin
High-risk infants in the neonatal intensive care unit often encounter hemodynamic instability, and poor blood circulation may cause shock or other sequelae. However, shock is difficult to notice in its initial stage, and most clinical judgments depend subjectively on experienced physicians. Therefore, effectively evaluating the neonatal blood circulation state is important for timely treatment. Although some instruments, such as laser Doppler flowmeters, can estimate blood flow information, there is still a lack of monitoring systems that evaluate neonatal blood circulation directly. Based on the technique of near-infrared spectroscopy, an intelligent neonatal blood perfusion assessment system was proposed in this study to monitor the changes in hemoglobin concentration and tissue oxygen saturation simultaneously and further estimate neonatal blood perfusion. Several indexes were defined from the changes in hemoglobin parameters under applied and released pressure to obtain the neonatal perfusion information. Moreover, a neural network-based classifier was used to effectively classify groups with different blood perfusion states. From the experimental results, the differences between groups with different blood perfusion states were clearly reflected in several of the defined indexes and could be effectively recognized using the neural network. Clinical and Translational Impact Statement: An intelligent neonatal blood perfusion assessment system was proposed to monitor the changes in hemoglobin concentration and tissue oxygen saturation simultaneously and further estimate neonatal blood perfusion. (Category: Preclinical Research)
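The system above derives hemoglobin-concentration changes from NIRS measurements. A standard way to do this is the modified Beer-Lambert law, solving a small linear system across two wavelengths; the sketch below uses placeholder extinction coefficients and path-length factor, not values from the paper:

```python
import numpy as np

# Modified Beer-Lambert sketch: recover changes in oxy-/deoxy-hemoglobin
# concentration from optical-density changes at two NIR wavelengths.
# The coefficients and path length below are illustrative placeholders.
EXT = np.array([[0.69, 2.78],   # wavelength 1: [HbO2, HbR] extinction (assumed)
                [2.53, 1.80]])  # wavelength 2
PATH = 6.0  # differential path-length factor x source-detector distance (assumed)

def hb_changes(delta_od):
    """Solve delta_OD = (EXT * PATH) @ delta_Hb for the two hemoglobin species."""
    return np.linalg.solve(EXT * PATH, np.asarray(delta_od, dtype=float))

d_hbo, d_hbr = hb_changes([0.02, 0.05])
sto2_like = d_hbo / (d_hbo + d_hbr)  # relative oxygenation of the measured change
print(d_hbo, d_hbr)
```

Perfusion indexes of the kind the abstract describes could then be defined on the time courses of these quantities under applied and released cuff pressure.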
Citations: 0
Design and Development of an Integrated Virtual Reality (VR)-Based Training System for Difficult Airway Management
IF 3.7, CAS Tier 3 (Medicine), Q2 ENGINEERING, BIOMEDICAL. Pub Date: 2025-01-14. DOI: 10.1109/JTEHM.2025.3529748
Saurabh Jain;Bijoy Dripta Barua Chowdhury;Jarrod M. Mosier;Vignesh Subbian;Kate Hughes;Young-Jun Son
For over 40 years, airway management simulation has been a cornerstone of medical training, aiming to reduce procedural risks for critically ill patients. However, existing simulation technologies often lack the versatility and realism needed to replicate the cognitive and physical challenges of complex airway management scenarios. We developed a novel Virtual Reality (VR)-based simulation system designed to enhance immersive airway management training and research. This system integrates physical and virtual environments with an external sensory framework to capture high-fidelity data on user performance. Advanced calibration techniques ensure precise positional tracking and realistic physics-based interactions, providing a cohesive mixed-reality experience. Validation studies conducted in a dedicated medical training center demonstrated the system's effectiveness in replicating real-world conditions. Positional calibration accuracy was achieved within 0.1 cm, with parameter calibrations showing no significant discrepancies. Validation using pre- and post-simulation surveys indicated positive feedback on training aspects, perceived usefulness, and ease of use.
Citations: 0
Fusion Model Using Resting Neurophysiological Data to Help Mass Screening of Methamphetamine Use Disorder
IF 3.7, CAS Tier 3 (Medicine), Q2 ENGINEERING, BIOMEDICAL. Pub Date: 2024-12-25. DOI: 10.1109/JTEHM.2024.3522356
Chun-Chuan Chen;Meng-Chang Tsai;Eric Hsiao-Kuang Wu;Shao-Rong Sheng;Jia-Jeng Lee;Yung-En Lu;Shih-Ching Yeh
Methamphetamine use disorder (MUD) is a substance use disorder. Because MUD has become more prevalent due to the COVID-19 pandemic, alternative ways to improve the efficiency of mass screening for MUD are important. Previous studies used electroencephalogram (EEG), heart rate variability (HRV), and galvanic skin response (GSR) aberrations during the virtual reality (VR) induction of drug craving to accurately separate patients with MUD from healthy controls. However, whether these abnormalities are present without induction of drug-cue reactivity, enabling separation between patients and healthy subjects, remains unclear. Here, we propose a clinically comparable intelligent system using the fusion of 5-channel EEG, HRV, and GSR data during the resting state to aid in detecting MUD. Forty-six patients with MUD and 26 healthy controls were recruited, and machine learning methods were employed to systematically compare the classification results of different fusion models. The analytic results revealed that the fusion of HRV and GSR features leads to the most accurate separation rate of 79%. The use of EEG, HRV, and GSR features provides more robust information, leading to relatively similar and enhanced accuracy across different classifiers. In conclusion, we demonstrated that a clinically applicable intelligent system using resting-state EEG, ECG, and GSR features without the induction of drug-cue reactivity enhances the detection of MUD.
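The fusion approach above concatenates features from multiple modalities before classification (early fusion). A toy illustration of that pattern follows; the synthetic features, dimensions, group shift, and nearest-centroid classifier are stand-ins for exposition, not the study's actual data or models:

```python
import numpy as np

rng = np.random.default_rng(42)

def make_group(n, shift):
    """Synthetic per-subject feature vectors for one group (dimensions invented)."""
    hrv = rng.normal(shift, 1.0, size=(n, 4))   # stand-in HRV features
    gsr = rng.normal(shift, 1.0, size=(n, 3))   # stand-in GSR features
    return np.hstack([hrv, gsr])                # early fusion: concatenate modalities

# Group sizes mirror the study (46 MUD, 26 controls); the separation is synthetic.
X = np.vstack([make_group(46, 0.8), make_group(26, 0.0)])
y = np.array([1] * 46 + [0] * 26)

# Nearest-centroid classifier as a minimal stand-in for the compared ML models.
c1, c0 = X[y == 1].mean(axis=0), X[y == 0].mean(axis=0)
pred = (np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1)).astype(int)
acc = (pred == y).mean()
print(f"training accuracy: {acc:.2f}")
```

In practice one would cross-validate rather than score on the training set, and compare several classifiers per feature combination, as the study does.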
Citations: 0
IEEE JOURNAL OF TRANSLATIONAL ENGINEERING IN HEALTH AND MEDICINE
IF 3.7, CAS Tier 3 (Medicine), Q2 ENGINEERING, BIOMEDICAL. Pub Date: 2024-12-13. DOI: 10.1109/JTEHM.2024.3516335
Citations: 0
IEEE Journal on Translational Engineering in Medicine and Biology publication information
IF 3.7, CAS Tier 3 (Medicine), Q2 ENGINEERING, BIOMEDICAL. Pub Date: 2024-12-13. DOI: 10.1109/JTEHM.2024.3513733
List of Reviewers
IF 3.7 · Medicine (CAS Tier 3) · Q2 Engineering, Biomedical · Pub Date: 2024-12-11 · DOI: 10.1109/JTEHM.2024.3507892
Antidepressant Treatment Response Prediction With Early Assessment of Functional Near-Infrared Spectroscopy and Micro-RNA
IF 3.7 · Medicine (CAS Tier 3) · Q2 Engineering, Biomedical · Pub Date: 2024-11-26 · DOI: 10.1109/JTEHM.2024.3506556
Lok Hua Lee;Cyrus Su Hui Ho;Yee Ling Chan;Gabrielle Wann Nii Tay;Cheng-Kai Lu;Tong Boon Tang
While functional near-infrared spectroscopy (fNIRS) has previously been suggested for major depressive disorder (MDD) diagnosis, its clinical application to predicting antidepressant treatment response (ATR) remains unclear. To address this, the current study investigates MDD ATR at three response levels using fNIRS and micro-ribonucleic acids (miRNAs). Our proposed algorithm includes a custom inter-subject variability reduction based on principal component analysis (PCA). The principal components of the extracted features are first identified for the non-responder group. The first few components, which together account for 99% of the explained variance, are discarded to minimize inter-subject variability, while the remaining projection vectors are applied to all response groups (24 non-responders, 15 partial responders, 13 responders) to obtain their relative projections in feature space. The full pipeline achieved better performance with a radial basis function (RBF) support vector machine (SVM), reaching 82.70% accuracy, 78.44% sensitivity, 86.15% precision, and 91.02% specificity, compared with conventional machine learning approaches that combine clinical, sociodemographic, and genetic information as predictors. These results suggest that ATR prediction can be improved by combining multiple feature sources, provided the inter-subject variability is properly addressed, and that the approach can serve as an effective tool for clinical decision support in MDD ATR prediction. Clinical and Translational Impact Statement: The fusion of neuroimaging fNIRS features and miRNA profiles significantly enhances the prediction accuracy of MDD ATR. The minimal feature requirements also make personalized medicine more practical and realizable.
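The variability-reduction step described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the feature data are simulated, the feature count is an arbitrary assumption, and scikit-learn's `PCA` and `SVC` stand in for whatever tooling the study actually used. Only the group sizes (24/15/13) come from the abstract.

```python
# Sketch of PCA-based inter-subject variability reduction, as described
# in the abstract. All feature matrices are simulated; group sizes match
# the study (24 non-responders, 15 partial responders, 13 responders).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_features = 30  # assumed combined fNIRS + miRNA feature count

# Simulate features with a few dominant shared directions, so the
# leading principal components capture most of the variance.
loading = rng.normal(size=(3, n_features))

def simulate(n):
    return rng.normal(size=(n, 3)) @ loading * 5 + rng.normal(size=(n, n_features)) * 0.1

X_non, X_partial, X_resp = simulate(24), simulate(15), simulate(13)

# 1. Fit PCA on the non-responder group only.
pca = PCA().fit(X_non)

# 2. Discard the leading components that together explain 99% of variance.
cum = np.cumsum(pca.explained_variance_ratio_)
k = int(np.searchsorted(cum, 0.99)) + 1   # number of components to drop
V = pca.components_[k:]                   # remaining projection vectors

# 3. Project every response group onto the remaining directions.
X_all = np.vstack([X_non, X_partial, X_resp])
Z = (X_all - pca.mean_) @ V.T

# 4. Classify the three response levels with an RBF-kernel SVM.
y = np.array([0] * 24 + [1] * 15 + [2] * 13)
clf = SVC(kernel="rbf").fit(Z, y)
```

Note the asymmetry that makes this a variability reduction rather than ordinary dimensionality reduction: the components are estimated from the non-responders alone, and it is the high-variance directions that are thrown away, on the premise that they encode subject-to-subject differences rather than response-related signal.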