
Latest publications in IEEE Journal of Biomedical and Health Informatics

Predicting Longitudinal Visual Field Progression with Class Imbalanced Data.
IF 6.7 CAS Tier 2 (Medicine) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-03-03 DOI: 10.1109/JBHI.2025.3547346
Ling Chen, Chun-Hung Chen, Wei Wang, Da-Wen Lu, Vincent S Tseng

Glaucoma is the leading cause of irreversible blindness worldwide. The clinical standard for glaucoma diagnosis and progression tracking remains visual field (VF) testing via standard automated perimetry. One outstanding challenge in many ophthalmic prediction tasks is class imbalance, where the majority class outnumbers the minority class(es). Although this issue has been reported in several prior studies on the prediction of VF progression or glaucoma, it has not been addressed in the context of longitudinal VF data. In this work, we propose VF-Transformer, a transformer-based framework for VF progression prediction based on longitudinal VF examination results. In particular, we address the class imbalance issue by incorporating our proposed inverted class-dependent temperature (ICDT) loss and weight normalization. The proposed framework was developed and evaluated on a public VF dataset and further validated on an external hospital dataset, using accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) as evaluation metrics. Extensive experiments and comparisons with existing state-of-the-art methods and class imbalance handling strategies confirm the effectiveness of the proposed framework in predicting VF progression in the presence of class imbalance.
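The abstract does not spell out the ICDT loss, but the name suggests per-class softmax temperatures tied inversely to class frequency, paired with weight normalization of the classifier. A hypothetical numpy sketch of that general idea (the function names and the exact temperature schedule are our assumptions, not the paper's formulation):

```python
import numpy as np

def icdt_like_loss(logits, labels, class_counts, base_temp=2.0):
    """Cross-entropy with a per-class temperature proportional to class
    frequency, so minority-class logits are divided by a smaller T and
    sharpened. Purely illustrative of the 'inverted class-dependent
    temperature' idea; the paper's exact ICDT loss is not given here."""
    counts = np.asarray(class_counts, dtype=float)
    temps = base_temp * counts / counts.max()   # rare class -> small T
    z = logits / temps                          # broadcast over class axis
    z = z - z.max(axis=1, keepdims=True)        # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def normalize_weights(W):
    # Weight normalization: unit-norm class weight rows, so majority
    # classes cannot dominate simply via larger weight magnitudes.
    return W / np.linalg.norm(W, axis=1, keepdims=True)
```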

Citations: 0
Semi-supervised Gland Segmentation via Feature-enhanced Contrastive Learning and Dual-consistency Strategy.
IF 6.7 CAS Tier 2 (Medicine) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-02-28 DOI: 10.1109/JBHI.2025.3546698
Jiejiang Yu, Bingbing Li, Xipeng Pan, Zhenwei Shi, Huadeng Wang, Rushi Lan, Xiaonan Luo

In the field of gland segmentation in histopathology, deep-learning methods have made significant progress. However, most existing methods not only require a large amount of high-quality annotated data but also tend to confuse the interior of the gland with the background. To address this challenge, we propose a new semi-supervised method for gland segmentation, named DCCL-Seg, which follows the teacher-student framework. Our approach proceeds in the following steps. First, we design a contrastive learning module to improve the ability of the student model's feature extractor to distinguish between gland and background features. Then, we introduce a Signed Distance Field (SDF) prediction task and employ a dual-consistency strategy (across tasks and models) to better reinforce learning of the gland interior. Next, we propose a pseudo-label filtering and reweighting mechanism, which filters and reweights the pseudo labels generated by the teacher model based on confidence. However, even after reweighting, the pseudo labels may still be influenced by unreliable pixels. Finally, we design an assistant predictor to learn the reweighted pseudo labels; it does not interfere with the student model's predictor and ensures the reliability of the student model's predictions. Experimental results on the publicly available GlaS and CRAG datasets demonstrate that our method outperforms other semi-supervised medical image segmentation methods.
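The Signed Distance Field target mentioned above can be made concrete: for a binary gland mask, each pixel stores its distance to the gland boundary, signed by whether it lies inside or outside the gland. A brute-force numpy sketch using one common SDF convention (positive inside, negative outside; the paper's exact definition may differ):

```python
import numpy as np

def signed_distance_field(mask):
    """Signed distance to the gland boundary for a small binary mask:
    positive inside the gland, negative outside. O(N^2) brute force,
    intended only to illustrate the SDF prediction target."""
    fg = np.stack(np.nonzero(mask), axis=1)                 # gland pixels
    bg = np.stack(np.nonzero(~mask.astype(bool)), axis=1)   # background pixels
    sdf = np.zeros(mask.shape)
    for y in range(mask.shape[0]):
        for x in range(mask.shape[1]):
            p = np.array([y, x])
            if mask[y, x]:
                # inside: distance to nearest background pixel
                sdf[y, x] = np.sqrt(((bg - p) ** 2).sum(axis=1)).min()
            else:
                # outside: negated distance to nearest gland pixel
                sdf[y, x] = -np.sqrt(((fg - p) ** 2).sum(axis=1)).min()
    return sdf
```

For real image sizes one would use `scipy.ndimage.distance_transform_edt` on the mask and its complement instead of the double loop.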

Citations: 0
Self-supervised Learning for Drug Discovery Using Nematode Images: Method and Dataset.
IF 6.7 CAS Tier 2 (Medicine) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-02-28 DOI: 10.1109/JBHI.2025.3546603
Lyuyang Wang, Sommer Chou, Mehrdad Eshraghi Dehaghani, Gerry Wright, Lesley MacNeil, Mehdi Moradi

Parasitic worms are significant causes of human and livestock disease. The battle against infections caused by parasitic worms involves the exploration of numerous potential drug candidates. One approach to screening for new drug candidates is to apply natural product extracts to the nematode C. elegans as a model organism. A critical step in this process is the examination of microscopy images of C. elegans after exposure to natural product extracts. Automatic image classification accelerates the analysis compared to purely visual identification by an expert. We report a new C. elegans image dataset that includes 12,717 microscopy images corresponding to natural product extracts, with about one-third of the images labeled by an expert and the remainder unlabeled. We make this dataset available to researchers for further development. We also propose a two-stage Semi-supervised Mix-up Barlow Twins Nematode Classifier (MBT-NC) to solve three image classification tasks involving nematode phenotypes after exposure to the studied natural extracts. MBT-NC combines self-supervised learning (SSL) for the feature representation stage (MBT) with a supervised classification stage (NC). In MBT, we utilize augmented and linearly interpolated samples for information maximization. Our method outperforms fully supervised methods as well as other self-supervised methods on all three classification tasks: for binary, six-class, and 27-class classification, it improves test accuracy by 3.2%, 1.0%, and 2.2%, respectively, over the competing methods. This is a new line of research in computer vision applications in healthcare. We have made this data public; users can obtain access through a simple request at https://docs.google.com/forms/d/e/1FAIpQLSc0kb3mbMvfrLEAhBAoMbbbNkvNyf1Qf7nyOCSHfTqs0eEb3w/viewform.
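The "linearly interpolated samples" above refer to mix-up style convex combinations of pairs of inputs. A minimal sketch, assuming the standard mix-up recipe with a Beta-distributed mixing weight (the paper's exact augmentation parameters are not given in the abstract):

```python
import numpy as np

def mixup(x1, x2, alpha=0.2, rng=None):
    """Mix-up: return a convex combination of two samples, with the
    mixing weight drawn from Beta(alpha, alpha). Illustrative of the
    linear interpolation used in the MBT stage, not the paper's code."""
    rng = np.random.default_rng(rng)
    lam = rng.beta(alpha, alpha)          # weight in (0, 1)
    return lam * x1 + (1 - lam) * x2, lam
```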

Citations: 0
DMSACNN: Deep Multiscale Attentional Convolutional Neural Network for EEG-Based Motor Decoding.
IF 6.7 CAS Tier 2 (Medicine) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-02-27 DOI: 10.1109/JBHI.2025.3546288
Ke Liu, Xin Xing, Tao Yang, Zhuliang Yu, Bin Xiao, Guoyin Wang, Wei Wu

Objective: Accurate decoding of electroencephalogram (EEG) signals has become increasingly important for brain-computer interfaces (BCIs). Specifically, motor imagery and motor execution (MI/ME) tasks enable the control of external devices by decoding EEG signals during imagined or real movements. However, accurately decoding MI/ME signals remains a challenge due to the limited utilization of temporal information and ineffective feature selection methods.

Methods: This paper introduces DMSACNN, an end-to-end deep multiscale attention convolutional neural network for MI/ME-EEG decoding. DMSACNN incorporates a deep multiscale temporal feature extraction module to capture temporal features at various levels. These features are then processed by a spatial convolutional module to extract spatial features. Finally, a local and global feature fusion attention module is utilized to combine local and global information and extract the most discriminative spatiotemporal features.
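The multiscale temporal extraction described above can be pictured as filtering the same EEG signal with kernels of several lengths and stacking the results. A toy numpy illustration with fixed moving-average kernels (the network learns its kernels; kernel sizes here are arbitrary choices of ours):

```python
import numpy as np

def multiscale_temporal_features(signal, kernel_sizes=(3, 7, 15)):
    """Filter a 1-D signal at several temporal scales and stack the
    outputs into a (n_scales, n_samples) feature map. Moving-average
    kernels stand in for the learned convolutions in DMSACNN."""
    feats = []
    for k in kernel_sizes:
        kernel = np.ones(k) / k                       # length-k smoother
        feats.append(np.convolve(signal, kernel, mode='same'))
    return np.stack(feats)
```

Longer kernels capture slower dynamics, so the stacked rows become progressively smoother versions of the input.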

Main results: DMSACNN achieves impressive accuracies of 78.20%, 96.34% and 70.90% for hold-out analysis on the BCI-IV-2a, High Gamma and OpenBMI datasets, respectively, outperforming most of the state-of-the-art methods.

Conclusion and significance: These results highlight the potential of DMSACNN in robust BCI applications. Our proposed method provides a valuable solution to improve the accuracy of the MI/ME-EEG decoding, which can pave the way for more efficient and reliable BCI systems. The source code for DMSACNN is available at https://github.com/xingxin-99/DMSANet.git.

Citations: 0
EMGANet: Edge-Aware Multi-Scale Group-Mix Attention Network for Breast Cancer Ultrasound Image Segmentation.
IF 6.7 CAS Tier 2 (Medicine) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-02-27 DOI: 10.1109/JBHI.2025.3546345
Jin Huang, Yazhao Mao, Jingwen Deng, Zhaoyi Ye, Yimin Zhang, Jingwen Zhang, Lan Dong, Hui Shen, Jinxuan Hou, Yu Xu, Xiaoxiao Li, Sheng Liu, Du Wang, Shengrong Sun, Liye Mei, Cheng Lei

Breast cancer is one of the most prevalent diseases among women worldwide. Early and accurate ultrasound image segmentation plays a crucial role in reducing mortality. Although deep learning methods have demonstrated remarkable segmentation potential, they still struggle with challenges in ultrasound images, including blurred boundaries and speckle noise. To generate accurate ultrasound image segmentation, this paper proposes the Edge-Aware Multi-Scale Group-Mix Attention Network (EMGANet), which produces accurate segmentation by integrating deep and edge features. The Multi-Scale Group Mix Attention block effectively aggregates both sparse global and local features, ensuring the extraction of valuable information. The subsequent Edge Feature Enhancement block then focuses on cancer boundaries, enhancing segmentation accuracy. Therefore, EMGANet effectively tackles unclear boundaries and noise in ultrasound images. We conduct experiments on two public datasets (Dataset-B, BUSI) and one private dataset containing 927 samples from Renmin Hospital of Wuhan University (BUSI-WHU). EMGANet demonstrates superior segmentation performance, achieving an overall accuracy (OA) of 98.56%, a mean IoU (mIoU) of 90.32%, and an ASSD of 6.1 pixels on the BUSI-WHU dataset. Additionally, EMGANet performs well on the two public datasets, with a mIoU of 88.2% and an ASSD of 9.2 pixels on Dataset-B, and a mIoU of 81.37% and an ASSD of 18.27 pixels on the BUSI dataset. EMGANet improves mIoU by about 2% over state-of-the-art methods across the three datasets. In summary, the proposed EMGANet significantly improves breast cancer segmentation through its Edge-Aware and Group-Mix Attention mechanisms, showing great potential for clinical applications.
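For reference, the mIoU figures above average the per-class intersection-over-union between prediction and ground truth. A minimal numpy implementation of that metric:

```python
import numpy as np

def mean_iou(pred, target, n_classes=2):
    """Mean intersection-over-union across classes for integer label
    maps; classes absent from both prediction and target are skipped."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union:
            ious.append(inter / union)
    return float(np.mean(ious))
```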

Citations: 0
Respiratory Anomaly and Disease Detection Using Multi-Level Temporal Convolutional Networks.
IF 6.7 CAS Tier 2 (Medicine) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-02-26 DOI: 10.1109/JBHI.2025.3545156
Kim-Ngoc T Le, Gyurin Byun, Syed M Raza, Duc-Tai Le, Hyunseung Choo

Automated analysis of respiratory sounds using Deep Learning (DL) plays a pivotal role in the early detection of lung diseases. However, current DL methods often examine the spatial and temporal characteristics of respiratory sounds in isolation, which inherently limits their potential. This study proposes a novel DL framework that captures spatial features through convolution operations and exploits the spatiotemporal correlations of these features using temporal convolutional networks. The proposed framework incorporates Multi-Level Temporal Convolutional Networks (ML-TCN) to considerably enhance model accuracy in detecting anomalous breathing cycles and classifying respiratory recordings from lung sound audio. Moreover, a transfer learning technique is employed to extract semantic features efficiently from the limited and imbalanced data in this domain. Thorough experiments on the well-known ICBHI 2017 challenge dataset show that the proposed framework outperforms state-of-the-art methods in both binary and multi-class classification tasks for respiratory anomaly and disease detection. In particular, improvements of up to 2.29% and 2.27% in the Score metric (the average of sensitivity and specificity) are demonstrated in the binary and multi-class anomalous-breathing-cycle detection tasks, respectively. In respiratory recording classification tasks, classification accuracy is improved by 2.69% for healthy-unhealthy binary classification and 1.47% for healthy, chronic, and non-chronic diagnosis. These results highlight the marked advantage of the ML-TCN over existing techniques, showcasing its potential to drive future innovations in respiratory healthcare technology.
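The Score metric referenced above, as the abstract notes, is the average of sensitivity and specificity (the ICBHI challenge convention), computable directly from confusion-matrix counts:

```python
def icbhi_score(tp, fn, tn, fp):
    """Score = (sensitivity + specificity) / 2, from the counts of true
    positives, false negatives, true negatives, and false positives."""
    sensitivity = tp / (tp + fn)   # recall on the abnormal class
    specificity = tn / (tn + fp)   # recall on the normal class
    return (sensitivity + specificity) / 2, sensitivity, specificity
```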

Citations: 0
Decoding Arm Movement Direction Using Ultra-High-Density EEG.
IF 6.7 CAS Tier 2 (Medicine) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-02-26 DOI: 10.1109/JBHI.2025.3545856
Zhen Ma, Xinyi Yang, Jiayuan Meng, Kun Wang, Minpeng Xu, Dong Ming

Detecting arm movement direction is significant for individuals with upper-limb motor disabilities seeking to restore independent self-care abilities. It involves accurately decoding fine movement patterns of the arm, which has become feasible using invasive brain-computer interfaces (BCIs). However, it remains a significant challenge for traditional electroencephalography (EEG)-based BCIs to decode multi-directional arm movements effectively. This study designed an ultra-high-density (UHD) EEG system to decode multi-directional arm movements. The system contains 200 electrodes with an inter-electrode spacing of about 4 mm. We analyzed the patterns of UHD EEG signals induced by arm movements in different directions. To extract discriminative features from UHD EEG, we proposed a spatial filtering method combining principal component analysis (PCA) and discriminative spatial pattern (DSP). We collected EEG signals from five healthy subjects (two left-handed and three right-handed) to verify the system's feasibility. The movement-related cortical potentials (MRCPs) showed a certain degree of separability in both waveforms and spatial patterns for arm movements in different directions. This study achieved an average classification accuracy of 63.15 (8.71)% for both arms (eight-class task) with a peak accuracy of 77.24%. For the dominant arm (four-class task), we obtained an average accuracy of 75.31 (9.21)% with a peak accuracy of 85.00%. For the first time, this study simultaneously decodes multi-directional movements of both arms using UHD EEG. This study provides a promising approach for detecting information about arm movement directions, which is significant for the development of BCIs.
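The PCA half of the PCA+DSP spatial filtering above can be sketched as an eigendecomposition of the trial-averaged channel covariance, keeping the highest-variance directions (the DSP step, which additionally uses class labels, is omitted; this is our illustration, not the paper's code):

```python
import numpy as np

def pca_spatial_filters(X, n_components=4):
    """PCA spatial filters from EEG trials of shape
    (n_trials, n_channels, n_samples). Returns a
    (n_components, n_channels) matrix W; a trial is spatially
    filtered as W @ trial, reducing 200 UHD channels to a few
    high-variance virtual channels."""
    cov = np.mean([np.cov(trial) for trial in X], axis=0)  # channel covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]                      # descending variance
    return eigvecs[:, order[:n_components]].T
```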

Citations: 0
Fine-grained Classification Reveals Angiopathological Heterogeneity of Port Wine Stains Using OCT and OCTA Features.
IF 6.7 CAS Tier 2 (Medicine) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-02-26 DOI: 10.1109/JBHI.2025.3545931
Xiaofeng Deng, Defu Chen, Bowen Liu, Xiwan Zhang, Haixia Qiu, Wu Yuan, Hongliang Ren

Accurate classification of port wine stains (PWS, vascular malformations present at birth) is critical for subsequent treatment planning. However, the current method of classifying PWS based on external skin appearance rarely reflects the underlying angiopathological heterogeneity of PWS lesions, resulting in inconsistent outcomes with common vascular-targeted photodynamic therapy (V-PDT) treatments. In contrast, optical coherence tomography angiography (OCTA) is an ideal tool for visualizing the vascular malformations of PWS. Previous studies have shown no significant correlation between OCTA quantitative metrics and the PWS subtypes determined by the current classification approach. In this study, we propose a novel fine-grained classification method for PWS that integrates OCT and OCTA imaging. Using a machine learning-based approach, we subdivided PWS into five distinct subtypes by uncovering the heterogeneity of hypodermic histopathology and vessel structures. Six quantitative metrics, encompassing the vascular morphology and depth information of PWS lesions, were designed and statistically analyzed to evaluate angiopathological differences among the subtypes. Our classification reveals significant distinctions across all metrics compared with conventional skin-appearance-based subtypes, demonstrating its ability to accurately capture angiopathological heterogeneity.

{"title":"Fine-grained Classification Reveals Angiopathological Heterogeneity of Port Wine Stains Using OCT and OCTA Features.","authors":"Xiaofeng Deng, Defu Chen, Bowen Liu, Xiwan Zhang, Haixia Qiu, Wu Yuan, Hongliang Ren","doi":"10.1109/JBHI.2025.3545931","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3545931","url":null,"abstract":"<p><p>Accurate classification of port wine stains (PWS, vascular malformations present at birth), is critical for subsequent treatment planning. However, the current method of classifying PWS based on the external skin appearance rarely reflects the underlying angiopathological heterogeneity of PWS lesions, resulting in inconsistent outcomes with the common vascular-targeted photodynamic therapy (V-PDT) treatments. Conversely, optical coherence tomography angiography (OCTA) is an ideal tool for visualizing the vascular malformations of PWS. Previous studies have shown no significant correlation between OCTA quantitative metrics and the PWS subtypes determined by the current classification approach. In this study, we propose a novel fine-grained classification method for PWS that integrates OCT and OCTA imaging. Utilizing a machine learning-based approach, we subdivided PWS into five distinct subtypes by unearthing the heterogeneity of hypodermic histopathology and vessel structures. Six quantitative metrics, encompassing vascular morphology and depth information of PWS lesions, were designed and statistically analyzed to evaluate angiopathological differences among the subtypes. Our classification reveals significant distinctions across all metrics compared to conventional skin appearance-based subtypes, demonstrating its ability to accurately capture angiopathological heterogeneity. 
This research marks the first attempt to classify PWS based on angiopathology, potentially guiding more effective subtyping and treatment strategies for PWS.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143541603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
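The subtyping step above — grouping lesions by quantitative vascular metrics — can be sketched with a standard clustering routine. The paper does not specify its machine-learning method here; k-means on z-scored metrics is a stand-in, and the data, metric names, and cluster count below are illustrative assumptions.

```python
import numpy as np

def zscore(X):
    """Standardize each column; the six metrics have different units."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

def kmeans(X, k, n_iter=100, seed=0):
    """Plain Lloyd's algorithm: returns (labels, centers)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Squared distance of every sample to every center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d2.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# Toy cohort: two obviously different vascular phenotypes, six metrics each
rng = np.random.default_rng(1)
a = rng.normal(0.0, 0.1, size=(30, 6))   # e.g. shallow, thin vessels
b = rng.normal(3.0, 0.1, size=(30, 6))   # e.g. deep, dilated vessels
X = zscore(np.vstack([a, b]))
labels, _ = kmeans(X, 2)
```

Standardizing first matters because metrics such as vessel diameter (µm) and depth (mm) would otherwise dominate the distance purely through their scale.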
Real-Time Autoregressive Forecast of Cardiac Features for Psychophysiological Applications.
IF 6.7 Zone 2 (Medicine) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2025-02-26 DOI: 10.1109/JBHI.2025.3546148
Cem O Yaldiz, David J Lin, Asim H Gazi, Gabriela Cestero, Chuoqi Chen, Bethany K Bracken, Aaron Winder, Spencer Lynn, Reza Sameni, Omer T Inan

Forecasting the near-exact moments of cardiac phases is crucial for several cardiovascular health applications. For instance, forecasts can enable the timing of specific stimuli (e.g., image or text presentation in psycholinguistic experiments) to coincide with cardiac phases like systole (cardiac ejection) and diastole (cardiac filling). This capability could be leveraged to enhance the amplitude of a subject's response, to prompt them in fight-or-flight scenarios, or to conduct retrospective analysis for physiological predictive models. While autoregressive models have been employed for physiological signal forecasting, no prior study has explored their application to forecasting aortic opening and closing timings. This work addresses this gap by presenting a comprehensive comparative analysis of autoregressive models, including various forms of Kalman filter-based implementations, that use previously detected R-peak, aortic opening, and aortic closing timings from the electrocardiogram (ECG) and seismocardiogram (SCG) to forecast subsequent timings. We evaluate the robustness of these models to noise introduced in both the SCG signals and the output of the feature detectors. Our findings indicate that time-varying and multi-feature algorithms outperform others, with forecast errors below 2 ms for the R-peak, below 3 ms for aortic opening timing, and below 10 ms for aortic closing timing. Importantly, we elucidate the distinct advantages of integrating multi-feature models, which improve noise robustness, and time-varying approaches, which adapt to rapid physiological changes.

{"title":"Real-Time Autoregressive Forecast of Cardiac Features for Psychophysiological Applications.","authors":"Cem O Yaldiz, David J Lin, Asim H Gazi, Gabriela Cestero, Chuoqi Chen, Bethany K Bracken, Aaron Winder, Spencer Lynn, Reza Sameni, Omer T Inan","doi":"10.1109/JBHI.2025.3546148","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3546148","url":null,"abstract":"<p><p>Forecasting the near-exact moments of cardiac phases is crucial for several cardiovascular health applications. For instance, forecasts can enable the timing of specific stimuli (e.g., image or text presentation in psycholinguistic experiments) to coincide with cardiac phases like systole (cardiac ejection) and diastole (cardiac filling). This capability could be leveraged to enhance the amplitude of a subject's response, prompt them in fight-or-flight scenarios or conduct retrospective analysis for physiological predictive models. While autoregressive models have been employed for physiological signal forecasting, no prior study has explored their application to forecasting aortic opening and closing timings. This work addresses this gap by presenting a comprehensive comparative analysis of autoregressive models, including various forms of Kalman filter-based implementations, that use previously detected R-peak, aortic opening, and closing timings from electrocardiogram (ECG) and seismocardiogram (SCG) to forecast subsequent timings. We evaluate the robustness of these models to noise introduced in both SCG signals and the output of feature detectors. Our findings indicate that time-varying and multi-feature algorithms outperform others, with forecast errors below 2 ms for R-peak, below 3 ms for aortic opening timing, and below 10 ms for aortic closing timing. Importantly, we elucidate the distinct advantages of integrating multi-feature models, which improve noise robustness, and time-varying approaches, which adapt to rapid physiological changes. 
These models can be extended to a wide range of short-term physiological predictive systems, such as acute stress detection, neuromodulation sensor feedback, or muscle fatigue monitoring, broadening their applicability beyond cardiac feature forecasting.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143541700","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
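A minimal version of the forecasting idea above — predicting the time of the next cardiac event from previously detected event timings — can be sketched with a scalar Kalman filter that tracks the inter-event interval as a random walk. The paper compares several richer, multi-feature and time-varying models; this single-feature sketch and its noise parameters `q` and `r` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def forecast_next_events(event_times, q=1e-6, r=1e-4):
    """One-step-ahead forecasts of event timings.

    A scalar Kalman filter tracks the inter-event interval x (random-walk
    state model). The forecast for event i+1 is the time of event i plus
    the current interval estimate, using only data available through event i.
    Returns forecasts aligned with event_times[2:].
    """
    intervals = np.diff(event_times)
    x, p = intervals[0], 1.0          # interval estimate and its variance
    forecasts = []
    for i in range(1, len(intervals)):
        p += q                         # predict: interval may drift
        forecasts.append(event_times[i] + x)   # causal forecast of event i+1
        z = intervals[i]               # observed interval ending at event i+1
        k = p / (p + r)                # Kalman gain
        x += k * (z - x)               # update interval estimate
        p *= (1 - k)
    return np.array(forecasts)

# Perfectly regular events at an assumed 0.8 s period (75 bpm)
times = np.array([0.8 * i for i in range(12)])
forecasts = forecast_next_events(times)
errors = forecasts - times[2:]
```

On regular rhythms the forecast error collapses to numerical noise; the interesting regime, and the one the models above target, is when the interval drifts with heart-rate variability and the detectors themselves are noisy.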
Self-Organized Prediction-Classification-Superposition of Longitudinal Cognitive Decline in Alzheimer's Disease: An Application to Novel Clinical Research Methodology.
IF 6.7 Zone 2 (Medicine) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2025-02-26 DOI: 10.1109/JBHI.2025.3546020
Hiroyuki Sato, Ryoichi Hanazawa, Keisuke Suzuki, Atsushi Hashizume, Akihiro Hirakawa

Progressive cognitive decline spanning decades is characteristic of Alzheimer's disease (AD). Various predictive models have been designed to detect its onset early and to study the long-term trajectories of cognitive test scores across populations of interest. Research efforts have been geared toward superimposing patients' cognitive test scores onto the long-term trajectory of gradual cognitive decline while accounting for the heterogeneity of AD. Multiple long-term cognitive assessment trajectories have been developed based on various parameters, highlighting the importance of classifying groups by disease progression pattern. In this study, a novel method capable of self-organized prediction, classification, and superposition of long-term cognitive trajectories from short-term individual data was developed, based on statistical and differential-equation modeling. Here, "self-organized" denotes a data-driven mechanism by which the prediction model adaptively configures its structure and parameters to classify individuals and estimate long-term trajectories. We validated the predictive accuracy of the proposed method on two cohorts: the Alzheimer's Disease Neuroimaging Initiative (ADNI) and the Japanese ADNI. We also present two practical illustrations: the simultaneous evaluation of risk factors associated with both the onset and the longitudinal progression of AD, and an innovative randomized controlled trial design for AD that standardizes the heterogeneity of patients enrolled in a clinical trial.

{"title":"Self-Organized Prediction-Classification-Superposition of Longitudinal Cognitive Decline in Alzheimer's Disease: An Application to Novel Clinical Research Methodology.","authors":"Hiroyuki Sato, Ryoichi Hanazawa, Keisuke Suzuki, Atsushi Hashizume, Akihiro Hirakawa","doi":"10.1109/JBHI.2025.3546020","DOIUrl":"https://doi.org/10.1109/JBHI.2025.3546020","url":null,"abstract":"<p><p>Progressive cognitive decline spanning across decades is characteristic of Alzheimer's disease (AD). Various predictive models have been designed to realize its early onset and study the long-term trajectories of cognitive test scores across populations of interest. Research efforts have been geared towards superimposing patients' cognitive test scores with the long-term trajectory denoting gradual cognitive decline, while considering the heterogeneity of AD. Multiple trajectories representing cognitive assessment for the long-term have been developed based on various parameters, highlighting the importance of classifying several groups based on disease progression patterns. In this study, a novel method capable of self-organized prediction, classification, and the overlay of long-term cognitive trajectories based on short-term individual data was developed, based on statistical and differential equation modeling. Here, \"self-organized\" denotes a data-driven mechanism by which the prediction model adaptively configures its structure and parameters to classify individuals and estimate long-term trajectories. We validated the predictive accuracy of the proposed method on two cohorts: the Alzheimer's Disease Neuroimaging Initiative (ADNI) and the Japanese ADNI. We also presented two practical illustrations of the simultaneous evaluation of risk factor associated with both the onset and the longitudinal progression of AD, and an innovative randomized controlled trial design for AD that standardizes the heterogeneity of patients enrolled in a clinical trial. 
These resources would improve the power of statistical hypothesis testing and help evaluate the therapeutic effect. The application of predicting the trajectory of longitudinal disease progression goes beyond AD, and is especially relevant for progressive and neurodegenerative disorders.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.7,"publicationDate":"2025-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143541429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
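The "superposition" idea above — aligning an individual's short-term scores onto a long-term population trajectory by estimating their position in disease time — can be sketched with a hypothetical logistic decline curve and a grid search over time offsets. The functional form, parameters, and helper names here are illustrative assumptions, not the authors' model.

```python
import math

def trajectory(t, a=30.0, b=0.4, c=10.0):
    """Hypothetical long-term cognitive score trajectory.

    A logistic decline from a maximum score of a toward 0, with midpoint
    at disease time c (years) and steepness b. Purely illustrative.
    """
    return a / (1.0 + math.exp(b * (t - c)))

def estimate_offset(obs_times, obs_scores, grid=None):
    """Align short-term observations to the trajectory.

    Finds the disease-time offset d minimizing the squared error between
    observed scores and trajectory(t + d) — i.e., where on the long-term
    curve this individual's short follow-up window sits.
    """
    if grid is None:
        grid = [i * 0.1 for i in range(201)]  # candidate offsets 0..20 years
    best, best_err = None, float('inf')
    for d in grid:
        err = sum((trajectory(t + d) - s) ** 2
                  for t, s in zip(obs_times, obs_scores))
        if err < best_err:
            best, best_err = d, err
    return best

# An individual observed annually for 3 visits, scores taken from the curve
obs_t = [0.0, 1.0, 2.0]
obs_s = [trajectory(t + 7.3) for t in obs_t]
est = estimate_offset(obs_t, obs_s)
```

A subtype-aware version of this alignment, as described in the abstract, would fit one trajectory per progression class and pick the class-offset pair with the lowest residual — which is also what makes such alignment useful for standardizing heterogeneity in trial enrollment.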
Journal
IEEE Journal of Biomedical and Health Informatics