Noncontact Neonatal Respiration Rate Estimation Using Machine Vision
Daniel G. Kyrollos, J. Tanner, K. Greenwood, J. Harrold, J. Green
2021 IEEE Sensors Applications Symposium (SAS), published 2021-08-23
DOI: 10.1109/SAS51076.2021.9530013
Citations: 12
Abstract
Using video data of neonates admitted to the neonatal intensive care unit (NICU), we developed and compared the performance of various techniques for noncontact respiration rate (RR) estimation. Data were collected from an overhead colour and depth (RGB-D) camera, while gold-standard physiologic data were captured from the hospital's patient monitor. We developed a deep learning algorithm for automatic detection of the face and chest area of the neonate. We then used this algorithm to identify time periods of low patient motion and to locate regions of interest for RR estimation. We produced a respiration signal by quantifying chest movement in the raw RGB video, the motion-magnified RGB video, and the depth video, and compared this to a respiration signal derived from changes in the green channel of the face. We were able to estimate RR from motion-magnified video and depth video, achieving a mean absolute error of less than 3.5 breaths per minute (BPM) for 69% and 67% of the time for each stream, respectively. We achieved this result without the need for skin segmentation, and the technique applies to fully clothed neonatal patients. We show that similar performance can be achieved from the depth and colour streams using this technique.
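To make the pipeline concrete, the following is a minimal sketch (not the authors' implementation) of the final estimation step: given a per-frame mean of a chest region of interest, recover RR as the dominant spectral peak within a plausible neonatal breathing band. The function name `estimate_rr`, the band limits, and the FFT-peak approach are illustrative assumptions; the paper does not specify these details.

```python
import numpy as np

def estimate_rr(roi_signal, fps, rr_band=(0.5, 1.5)):
    """Estimate respiration rate in breaths per minute from a 1-D ROI signal.

    roi_signal : per-frame mean pixel value (depth or intensity) of the chest ROI.
    fps        : camera frame rate in frames per second.
    rr_band    : assumed neonatal breathing band in Hz (0.5-1.5 Hz = 30-90 BPM).
    """
    x = np.asarray(roi_signal, dtype=float)
    x = x - x.mean()  # remove the DC offset so the spectral peak is the breathing motion

    # Spectrum of the real-valued signal and the matching frequency axis.
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(x))

    # Restrict the search to the plausible breathing band and take the peak.
    band = (freqs >= rr_band[0]) & (freqs <= rr_band[1])
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak_hz  # convert Hz to breaths per minute


if __name__ == "__main__":
    # Synthetic 30 s recording at 30 fps: a 0.8 Hz (48 BPM) breathing motion plus noise.
    rng = np.random.default_rng(0)
    fps = 30
    t = np.arange(0, 30, 1.0 / fps)
    sig = np.sin(2 * np.pi * 0.8 * t) + 0.05 * rng.standard_normal(t.size)
    print(round(estimate_rr(sig, fps), 1))
```

In practice the ROI mean would come from the depth or (motion-magnified) colour stream over a low-motion window, and the same function could be applied unchanged to a face green-channel trace.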