Title: OSA-CCNN: Obstructive Sleep Apnea Detection Based on a Composite Deep Convolution Neural Network Model using Single-Lead ECG signal
Authors: Yu Zhou, Yinxian He, Kyungtae Kang
DOI: 10.1109/BIBM55620.2022.9995675
Venue: 2022 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)
Publication date: 2022-12-06
Citations: 0
Abstract
Obstructive sleep apnea (OSA) is a common sleep disorder that causes interrupted breathing during sleep and is linked to a number of other conditions, including cardiovascular diseases such as hypertension and coronary heart disease. Nocturnal polysomnography (PSG) is one of the clinical diagnostic standards for OSA, but it is an uncomfortable and expensive form of diagnosis, as it requires time-consuming manual interpretation by experts. ECG-based techniques for diagnosing OSA have been introduced to alleviate these problems, but most of the solutions proposed thus far rely on feature engineering, which demands substantial specialist knowledge and expertise. In this study, we present a novel approach for classifying OSA based on single-lead ECG signal conversion and a composite deep convolutional neural network model. The ECG signal is transformed into scalogram images capturing heart rate variability (HRV) characteristics and Gramian Angular Field (GAF) matrix images capturing the temporal properties of the ECG, yielding a hybrid image dataset. The composite model contains three sub-convolutional neural networks: two use fine-tuned AlexNet and ResNet models, and the third is a convolutional neural network with five residual blocks; their outputs are combined by a voting mechanism. The PhysioNet Apnea-ECG database was used to train and evaluate the proposed model. The results show that the proposed classifier achieved 90.93% accuracy, 83.86% sensitivity, 95.29% specificity, and an AUC of 0.89 on the hybrid image dataset.
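The Gramian Angular Field transform mentioned in the abstract maps a 1-D signal into a 2-D matrix by rescaling the samples to [-1, 1], converting them to polar-coordinate angles, and taking pairwise cosines of angle sums. The sketch below is a minimal illustration of the standard GASF variant, not the authors' implementation; the function name and test signal are assumptions for demonstration.

```python
import numpy as np

def gramian_angular_field(series):
    """Sketch of a Gramian Angular Summation Field (GASF) transform.

    Maps a 1-D signal to an N x N matrix suitable for use as an image
    input to a CNN, as in GAF-based ECG classification pipelines.
    """
    x = np.asarray(series, dtype=float)
    # Rescale the signal to [-1, 1] so that arccos is well-defined.
    x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0
    x = np.clip(x, -1.0, 1.0)  # guard against floating-point overshoot
    phi = np.arccos(x)         # angular (polar-coordinate) encoding
    # GASF[i, j] = cos(phi_i + phi_j), computed via broadcasting.
    return np.cos(phi[:, None] + phi[None, :])

# Toy example on a short synthetic signal (not ECG data).
gaf = gramian_angular_field([0.0, 0.5, 1.0, 0.5, 0.0])
print(gaf.shape)  # (5, 5)
```

The resulting matrix is symmetric, and its diagonal equals cos(2*phi_i), so temporal correlations between sample pairs become spatial structure that a 2-D CNN can learn from.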