Md Nazmul Islam Shuzan, Moajjem Hossain Chowdhury, Saadia Binte Alam, Mamun Bin Ibne Reaz, Muhammad Salman Khan, M. Murugappan, Muhammad E. H. Chowdhury
{"title":"PPG2RespNet:用于从光心动图(PPG)信号合成和监测呼吸信号的深度学习模型","authors":"Md Nazmul Islam Shuzan, Moajjem Hossain Chowdhury, Saadia Binte Alam, Mamun Bin Ibne Reaz, Muhammad Salman Khan, M. Murugappan, Muhammad E. H. Chowdhury","doi":"10.1007/s13246-024-01482-1","DOIUrl":null,"url":null,"abstract":"<p>Breathing conditions affect a wide range of people, including those with respiratory issues like asthma and sleep apnea. Smartwatches with photoplethysmogram (PPG) sensors can monitor breathing. However, current methods have limitations due to manual parameter tuning and pre-defined features. To address this challenge, we propose the PPG2RespNet deep-learning framework. It draws inspiration from the UNet and UNet + + models. It uses three publicly available PPG datasets (VORTAL, BIDMC, Capnobase) to autonomously and efficiently extract respiratory signals. The datasets contain PPG data from different groups, such as intensive care unit patients, pediatric patients, and healthy subjects. Unlike conventional U-Net architectures, PPG2RespNet introduces layered skip connections, establishing hierarchical and dense connections for robust signal extraction. The bottleneck layer of the model is also modified to enhance the extraction of latent features. To evaluate PPG2RespNet’s performance, we assessed its ability to reconstruct respiratory signals and estimate respiration rates. The model outperformed other models in signal-to-signal synthesis, achieving exceptional Pearson correlation coefficients (PCCs) with ground truth respiratory signals: 0.94 for BIDMC, 0.95 for VORTAL, and 0.96 for Capnobase. With mean absolute errors (MAE) of 0.69, 0.58, and 0.11 for the respective datasets, the model exhibited remarkable precision in estimating respiration rates. We used regression and Bland-Altman plots to analyze the predictions of the model in comparison to the ground truth. PPG2RespNet can thus obtain high-quality respiratory signals non-invasively, making it a valuable tool for calculating respiration rates.</p>","PeriodicalId":2,"journal":{"name":"ACS Applied Bio Materials","volume":null,"pages":null},"PeriodicalIF":4.6000,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"PPG2RespNet: a deep learning model for respirational signal synthesis and monitoring from photoplethysmography (PPG) signal\",\"authors\":\"Md Nazmul Islam Shuzan, Moajjem Hossain Chowdhury, Saadia Binte Alam, Mamun Bin Ibne Reaz, Muhammad Salman Khan, M. Murugappan, Muhammad E. H. Chowdhury\",\"doi\":\"10.1007/s13246-024-01482-1\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Breathing conditions affect a wide range of people, including those with respiratory issues like asthma and sleep apnea. Smartwatches with photoplethysmogram (PPG) sensors can monitor breathing. However, current methods have limitations due to manual parameter tuning and pre-defined features. To address this challenge, we propose the PPG2RespNet deep-learning framework. It draws inspiration from the UNet and UNet + + models. It uses three publicly available PPG datasets (VORTAL, BIDMC, Capnobase) to autonomously and efficiently extract respiratory signals. The datasets contain PPG data from different groups, such as intensive care unit patients, pediatric patients, and healthy subjects. 
Unlike conventional U-Net architectures, PPG2RespNet introduces layered skip connections, establishing hierarchical and dense connections for robust signal extraction. The bottleneck layer of the model is also modified to enhance the extraction of latent features. To evaluate PPG2RespNet’s performance, we assessed its ability to reconstruct respiratory signals and estimate respiration rates. The model outperformed other models in signal-to-signal synthesis, achieving exceptional Pearson correlation coefficients (PCCs) with ground truth respiratory signals: 0.94 for BIDMC, 0.95 for VORTAL, and 0.96 for Capnobase. With mean absolute errors (MAE) of 0.69, 0.58, and 0.11 for the respective datasets, the model exhibited remarkable precision in estimating respiration rates. We used regression and Bland-Altman plots to analyze the predictions of the model in comparison to the ground truth. PPG2RespNet can thus obtain high-quality respiratory signals non-invasively, making it a valuable tool for calculating respiration rates.</p>\",\"PeriodicalId\":2,\"journal\":{\"name\":\"ACS Applied Bio Materials\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":4.6000,\"publicationDate\":\"2024-09-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACS Applied Bio Materials\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1007/s13246-024-01482-1\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"MATERIALS SCIENCE, BIOMATERIALS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACS Applied Bio Materials","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1007/s13246-024-01482-1","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MATERIALS SCIENCE, BIOMATERIALS","Score":null,"Total":0}
PPG2RespNet: a deep learning model for respirational signal synthesis and monitoring from photoplethysmography (PPG) signal
Respiratory conditions such as asthma and sleep apnea affect a wide range of people. Smartwatches equipped with photoplethysmogram (PPG) sensors can monitor breathing, but current methods are limited by manual parameter tuning and pre-defined features. To address this challenge, we propose the PPG2RespNet deep-learning framework, which draws inspiration from the UNet and UNet++ models. It uses three publicly available PPG datasets (VORTAL, BIDMC, Capnobase) to extract respiratory signals autonomously and efficiently. The datasets contain PPG data from different groups, such as intensive care unit patients, pediatric patients, and healthy subjects. Unlike conventional U-Net architectures, PPG2RespNet introduces layered skip connections, establishing hierarchical and dense connections for robust signal extraction. The bottleneck layer of the model is also modified to enhance the extraction of latent features. To evaluate PPG2RespNet's performance, we assessed its ability to reconstruct respiratory signals and estimate respiration rates. The model outperformed other models in signal-to-signal synthesis, achieving high Pearson correlation coefficients (PCCs) with ground-truth respiratory signals: 0.94 for BIDMC, 0.95 for VORTAL, and 0.96 for Capnobase. With mean absolute errors (MAE) of 0.69, 0.58, and 0.11 in respiration-rate estimation for the respective datasets, the model exhibited remarkable precision. Regression and Bland-Altman plots were used to compare the model's predictions with the ground truth. PPG2RespNet can thus obtain high-quality respiratory signals non-invasively, making it a valuable tool for calculating respiration rates.
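The abstract describes an architecture in the spirit of UNet++: a 1-D encoder-decoder in which decoder nodes receive dense, layered skip connections from every node at the same resolution. The sketch below is a minimal, hypothetical PyTorch illustration of that idea for PPG-to-respiration signal synthesis; the depth, channel widths, and module names are assumptions for illustration only and do not reproduce the authors' published PPG2RespNet configuration.

```python
# Illustrative sketch only: a small UNet++-style 1-D encoder-decoder with nested
# (dense) skip connections, in the spirit of the "layered skip connections"
# described in the abstract. Not the authors' published architecture.
import torch
import torch.nn as nn


def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """Two 1-D convolutions with ReLU, the basic unit at every grid node."""
    return nn.Sequential(
        nn.Conv1d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv1d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class NestedSkipNet1D(nn.Module):
    """Two-level nested U-Net for signal-to-signal synthesis (PPG -> respiration)."""

    def __init__(self, ch: int = 16):
        super().__init__()
        self.pool = nn.MaxPool1d(2)
        self.up = nn.Upsample(scale_factor=2, mode="linear", align_corners=False)
        # Encoder column X(i, 0)
        self.x00 = conv_block(1, ch)
        self.x10 = conv_block(ch, 2 * ch)
        self.x20 = conv_block(2 * ch, 4 * ch)          # deepest node (bottleneck)
        # Intermediate nested node X(0, 1) receives [x00, up(x10)]
        self.x01 = conv_block(ch + 2 * ch, ch)
        # Decoder node X(1, 1) receives [x10, up(x20)]
        self.x11 = conv_block(2 * ch + 4 * ch, 2 * ch)
        # Final node X(0, 2) receives the dense skip [x00, x01, up(x11)]
        self.x02 = conv_block(ch + ch + 2 * ch, ch)
        self.head = nn.Conv1d(ch, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Input length should be divisible by 4 so that pooling/upsampling align.
        x00 = self.x00(x)
        x10 = self.x10(self.pool(x00))
        x20 = self.x20(self.pool(x10))
        x01 = self.x01(torch.cat([x00, self.up(x10)], dim=1))
        x11 = self.x11(torch.cat([x10, self.up(x20)], dim=1))
        x02 = self.x02(torch.cat([x00, x01, self.up(x11)], dim=1))
        return self.head(x02)


if __name__ == "__main__":
    ppg = torch.randn(2, 1, 1024)      # batch of 1024-sample PPG windows
    resp = NestedSkipNet1D()(ppg)
    print(resp.shape)                  # torch.Size([2, 1, 1024])
```

Likewise, the reported evaluation rests on Pearson correlation between synthesized and reference respiratory signals, the mean absolute error of the derived respiration rate, and Bland-Altman analysis. The sketch below shows one plausible way to compute these quantities with NumPy/SciPy; the peak-counting rate estimate and the synthetic example signals are illustrative assumptions, not the authors' exact procedure.

```python
# Sketch of the evaluation quantities named in the abstract: Pearson correlation,
# respiration-rate error, and Bland-Altman limits of agreement. Illustrative only.
import numpy as np
from scipy.stats import pearsonr
from scipy.signal import find_peaks


def respiration_rate_bpm(resp: np.ndarray, fs: float) -> float:
    """Crude rate estimate: count breath peaks per minute of signal."""
    peaks, _ = find_peaks(resp, distance=int(1.5 * fs),
                          prominence=0.5 * np.ptp(resp))
    return 60.0 * len(peaks) / (len(resp) / fs)


def bland_altman_limits(pred: np.ndarray, ref: np.ndarray):
    """Bias and 95% limits of agreement for predicted vs. reference rates."""
    diff = np.asarray(pred) - np.asarray(ref)
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd


# Synthetic stand-ins for a ground-truth window and a model-synthesized window.
fs = 30.0                                   # assumed sampling rate in Hz
t = np.arange(0, 60, 1 / fs)
ref_sig = np.sin(2 * np.pi * 0.25 * t)      # 15 breaths/min reference signal
pred_sig = ref_sig + 0.1 * np.random.randn(t.size)

pcc, _ = pearsonr(pred_sig, ref_sig)        # signal-to-signal agreement
err = abs(respiration_rate_bpm(pred_sig, fs) - respiration_rate_bpm(ref_sig, fs))
print(f"PCC = {pcc:.3f}, |RR error| = {err:.2f} breaths/min")

# Across a test set, collect per-window (predicted, reference) rate pairs and
# pass them to bland_altman_limits to obtain the bias and limits of agreement
# that underlie a Bland-Altman plot.
```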