{"title":"Deep Learning Classification of Fetal Cardiotocography Data with Differential Privacy","authors":"Ashish Kumar Lal, S. Karthikeyan","doi":"10.1109/CSI54720.2022.9924087","DOIUrl":null,"url":null,"abstract":"Cardiotocography (CTG) is a continuous recording of the fetal heart rate (FHR) obtained from an ultrasound transducer placed on the mother's abdomen. In common practice, obstetricians visually inspect the CTG signal to monitor the condition of the fetus's heart. This manual inspection is not reliable as it is prone to human error and biases. To overcome these short-comings, researchers had developed various AI-based diagnosis models for the automatic classification of CTG data. A few recent research had reported that neural network outperforms other machine learning models. Despite the advancements in automatic classification techniques, the adoption of these AI models has not been widespread due to the requirement for privacy of the patient record. The medical institutions are unwilling to share or publish these records, due to ethical and legal reasons. This discourages the deployment of such AI models and consequently hinders active and collaborative research work. To alleviate the privacy breach concern, we used a deep privacy-preserving CTG data classification model by adopting Differential Privacy (D P) framework. DP has widely been accepted as the gold standard of privacy guarantee. As privacy comes at an additional cost of slight downgrade in the model's performance. To mitigate this performance degradation, we have proposed a two stage binary classification which improves the model performance while maintaining the same privacy guarantee. The experimental results show that an improved performance of the proposed model with accuracy increased to 0.91 from 0.89 with E = 10 of (E,6)- Differential Privacy.","PeriodicalId":221137,"journal":{"name":"2022 International Conference on Connected Systems & Intelligence (CSI)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Conference on Connected Systems & Intelligence (CSI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CSI54720.2022.9924087","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Cardiotocography (CTG) is a continuous recording of the fetal heart rate (FHR) obtained from an ultrasound transducer placed on the mother's abdomen. In common practice, obstetricians visually inspect the CTG signal to monitor the condition of the fetus's heart. This manual inspection is unreliable, as it is prone to human error and bias. To overcome these shortcomings, researchers have developed various AI-based diagnosis models for the automatic classification of CTG data, and several recent studies have reported that neural networks outperform other machine learning models. Despite these advances in automatic classification, adoption of such AI models has not been widespread because patient records must remain private: medical institutions are unwilling to share or publish these records for ethical and legal reasons. This discourages the deployment of such AI models and consequently hinders active and collaborative research. To alleviate this privacy concern, we build a deep privacy-preserving CTG data classification model by adopting the Differential Privacy (DP) framework, which is widely accepted as the gold standard of privacy guarantee. Because privacy comes at the additional cost of a slight degradation in model performance, we propose a two-stage binary classification scheme that improves performance while maintaining the same privacy guarantee. The experimental results show improved performance of the proposed model, with accuracy increasing from 0.89 to 0.91 at ε = 10 under (ε, δ)-Differential Privacy.
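The abstract does not describe the authors' implementation, so the following is only a minimal sketch of how a differentially private neural-network classifier for tabular CTG features can be trained with DP-SGD in PyTorch using Opacus. The architecture, feature count (21, as in the UCI Cardiotocography dataset), class count, δ = 1e-5, and the placeholder data are illustrative assumptions, not the paper's exact setup.

```python
# Hypothetical sketch: DP-SGD training of a small MLP on tabular CTG features
# with Opacus. Model shape, dataset size, and privacy parameters are assumed
# for illustration only.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

N_FEATURES, N_CLASSES = 21, 3      # UCI CTG dataset dimensions (assumed)
EPOCHS, BATCH_SIZE = 30, 64

model = nn.Sequential(
    nn.Linear(N_FEATURES, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, N_CLASSES),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Placeholder data; replace with the real CTG feature matrix and labels.
X = torch.randn(2126, N_FEATURES)
y = torch.randint(0, N_CLASSES, (2126,))
train_loader = DataLoader(TensorDataset(X, y), batch_size=BATCH_SIZE, shuffle=True)

# Wrap the model, optimizer, and loader so every update clips per-sample
# gradients and adds calibrated Gaussian noise, targeting (epsilon=10, delta=1e-5)-DP.
privacy_engine = PrivacyEngine()
model, optimizer, train_loader = privacy_engine.make_private_with_epsilon(
    module=model,
    optimizer=optimizer,
    data_loader=train_loader,
    epochs=EPOCHS,
    target_epsilon=10.0,
    target_delta=1e-5,
    max_grad_norm=1.0,   # per-sample gradient clipping bound
)

for epoch in range(EPOCHS):
    for xb, yb in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()

print(f"privacy budget spent: epsilon = {privacy_engine.get_epsilon(delta=1e-5):.2f}")
```

The paper's two-stage binary classification could plausibly be realized by training two such binary classifiers in sequence (for example, Normal vs. not-Normal, then Suspect vs. Pathological) under the same overall privacy budget, but the abstract does not specify the exact decomposition, so that reading is speculative.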