{"title":"TrustSleepNet: A Trustable Deep Multimodal Network for Sleep Stage Classification","authors":"Guanjie Huang, Fenglong Ma","doi":"10.1109/BHI56158.2022.9926875","DOIUrl":null,"url":null,"abstract":"Correctly classifying different sleep stages is a critical and prerequisite step in diagnosing sleep-related issues. In practice, the clinical experts must manually review the polysomnography (PSG) recordings to classify sleep stages. Such a procedure is time-consuming, laborious, and potentially prone to human subjective errors. Deep learning-based methods have been successfully adopted for automatically classifying sleep stages in recent years. However, they cannot simply say “I do not know” when they are uncertain in their predictions, which may easily create significant risk in clinical applications, despite their good performance. To address this issue, we propose a deep model, named TrustSleepNet, which contains evidential learning and cross-modality attention modules. Evidential learning predicts the probability density of the classes, which can learn an uncertainty score and make the prediction trustable in real-world clinical applications. Cross-modality attention adaptively fuses multimodal PSG data by enhancing the significant ones and suppressing irrelevant ones. Experimental results demonstrate that TrustSleepNet outperforms state-of-the-art benchmark methods, and the uncertainty score makes the prediction more trustable and reliable.","PeriodicalId":347210,"journal":{"name":"2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/BHI56158.2022.9926875","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Correctly classifying sleep stages is a critical prerequisite for diagnosing sleep-related disorders. In practice, clinical experts must manually review polysomnography (PSG) recordings to classify sleep stages, a procedure that is time-consuming, laborious, and prone to subjective human error. In recent years, deep learning-based methods have been successfully adopted for automatic sleep stage classification. However, despite their good performance, these methods cannot simply say “I do not know” when they are uncertain about a prediction, which can create significant risk in clinical applications. To address this issue, we propose a deep model, named TrustSleepNet, which contains evidential learning and cross-modality attention modules. The evidential learning module predicts a probability density over the classes, from which an uncertainty score is derived, making the predictions trustable in real-world clinical applications. The cross-modality attention module adaptively fuses multimodal PSG data by enhancing significant modalities and suppressing irrelevant ones. Experimental results demonstrate that TrustSleepNet outperforms state-of-the-art benchmark methods and that the uncertainty score makes its predictions more trustable and reliable.
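The abstract does not give implementation details, but both ideas map onto well-known building blocks. Below is a minimal, hypothetical sketch assuming the standard subjective-logic formulation of evidential deep learning (non-negative evidence parameterizing a Dirichlet distribution, with uncertainty u = K / S for K classes and Dirichlet strength S) combined with a simple learned attention weighting over per-modality PSG features. All module names, dimensions, and the fusion scheme are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch (PyTorch) of evidential classification plus attention-based
# multimodal fusion. Assumes the common softplus-evidence -> Dirichlet
# formulation of evidential deep learning; this is NOT TrustSleepNet's
# verified design, only a plausible instance of the two ideas.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialFusionSketch(nn.Module):
    def __init__(self, feat_dim: int, n_classes: int = 5):
        super().__init__()
        # One shared scoring head: scalar relevance score per modality.
        self.score = nn.Linear(feat_dim, 1)
        # Evidence head: non-negative evidence for each sleep stage.
        self.evidence = nn.Linear(feat_dim, n_classes)
        self.n_classes = n_classes

    def forward(self, feats: torch.Tensor):
        # feats: (batch, n_modalities, feat_dim), one feature vector per
        # PSG modality (e.g., EEG, EOG, EMG channels).
        # Cross-modality attention: softmax over modalities, enhancing
        # significant ones and suppressing irrelevant ones.
        attn = F.softmax(self.score(feats), dim=1)          # (B, M, 1)
        fused = (attn * feats).sum(dim=1)                   # (B, feat_dim)

        # Evidential output: softplus keeps evidence non-negative;
        # alpha = evidence + 1 parameterizes a Dirichlet over class probs.
        evidence = F.softplus(self.evidence(fused))         # (B, K)
        alpha = evidence + 1.0
        strength = alpha.sum(dim=1, keepdim=True)           # Dirichlet strength S

        prob = alpha / strength                             # expected class probs
        uncertainty = self.n_classes / strength.squeeze(1)  # u = K / S, in (0, 1]
        return prob, uncertainty, attn

# Usage: flag low-confidence epochs instead of forcing a prediction.
model = EvidentialFusionSketch(feat_dim=64)
feats = torch.randn(8, 3, 64)   # dummy features for 3 modalities
prob, u, attn = model(feats)
needs_review = u > 0.5          # hypothetical threshold for deferring to an expert
```

The practical payoff over a plain softmax is the last line: sleep epochs whose uncertainty exceeds a chosen threshold can be deferred to a clinical expert rather than auto-labeled, which is what lets the model effectively say “I do not know.”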