Noncontact Neonatal Respiration Rate Estimation Using Machine Vision

Daniel G. Kyrollos, J. Tanner, K. Greenwood, J. Harrold, J. Green
{"title":"Noncontact Neonatal Respiration Rate Estimation Using Machine Vision","authors":"Daniel G. Kyrollos, J. Tanner, K. Greenwood, J. Harrold, J. Green","doi":"10.1109/SAS51076.2021.9530013","DOIUrl":null,"url":null,"abstract":"Using video data of neonates admitted to the neonatal intensive care unit (NICU) we developed and compared the performance of various techniques for noncontact respiration rate (RR) estimation. Data were collected from an overhead colour and depth (RGB-D) camera, while gold standard physiologic data were captured from the hospital's patient monitor. We developed a deep learning algorithm for automatic detection of the face and chest area of the neonate. We then use this algorithm to identify time periods with low patient motion and to locate regions of interest for RR estimation. We produce a respiration signal by quantifying the chest movement using the raw RGB video, motion-magnified RGB video, and depth video. We compare this to a respiration signal derived from the changes in the green channel of the face. We were able to estimate RR from motion-magnified video and depth video, achieving a mean absolute error of less than 3.5 BPM for 69% and 67% of the time for each stream, respectively. We achieve this result without the need for skin segmentation and can apply our technique to fully clothed neonatal patients. We show that similar performance can be achieved using the depth and colour stream using this technique.","PeriodicalId":224327,"journal":{"name":"2021 IEEE Sensors Applications Symposium (SAS)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"12","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE Sensors Applications Symposium (SAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SAS51076.2021.9530013","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 12

Abstract

Using video data of neonates admitted to the neonatal intensive care unit (NICU), we developed and compared the performance of various techniques for noncontact respiration rate (RR) estimation. Data were collected from an overhead colour and depth (RGB-D) camera, while gold-standard physiologic data were captured from the hospital's patient monitor. We developed a deep learning algorithm for automatic detection of the face and chest area of the neonate. We then used this algorithm to identify time periods with low patient motion and to locate regions of interest for RR estimation. We produced a respiration signal by quantifying chest movement in the raw RGB video, motion-magnified RGB video, and depth video, and compared it to a respiration signal derived from changes in the green channel of the face. We were able to estimate RR from motion-magnified video and depth video, achieving a mean absolute error of less than 3.5 BPM for 69% and 67% of the time, respectively. We achieved this result without the need for skin segmentation, and our technique can be applied to fully clothed neonatal patients. We show that similar performance can be achieved with either the depth or the colour stream.
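As a rough sketch of the pipeline summarised above (not the authors' implementation), the Python snippet below averages the depth values inside a chest region of interest to form a respiration signal and takes the dominant spectral peak within a plausible breathing band as the RR estimate, which can then be compared to the patient-monitor reference via mean absolute error. The ROI coordinates, the 30 fps frame rate, the 0.5-2.0 Hz band, and all function names are illustrative assumptions; the paper obtains the ROI from its learned face/chest detector and also derives signals from raw and motion-magnified RGB video.

```python
import numpy as np

def respiration_signal_from_depth(depth_frames, roi):
    """Average the depth values inside a chest ROI for every frame.

    depth_frames: numpy array of shape (T, H, W), e.g. depth in millimetres.
    roi: (top, bottom, left, right) pixel bounds of the chest region; in the
         paper this region comes from the learned face/chest detector, here
         it is assumed to be given.
    """
    top, bottom, left, right = roi
    return depth_frames[:, top:bottom, left:right].mean(axis=(1, 2))

def estimate_rr_bpm(signal, fps, band_hz=(0.5, 2.0)):
    """Estimate respiration rate in breaths per minute (BPM) as the dominant
    spectral peak of the mean-removed ROI signal, restricted to an assumed
    breathing band (0.5-2.0 Hz, i.e. 30-120 BPM)."""
    x = np.asarray(signal, dtype=float)
    x -= x.mean()
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    power = np.abs(np.fft.rfft(x)) ** 2
    mask = (freqs >= band_hz[0]) & (freqs <= band_hz[1])
    peak_hz = freqs[mask][np.argmax(power[mask])]
    return 60.0 * peak_hz

def mean_absolute_error_bpm(estimates, reference):
    """MAE (in BPM) of the video-derived RR estimates against the
    gold-standard rates from the patient monitor."""
    return float(np.mean(np.abs(np.asarray(estimates) - np.asarray(reference))))

# Hypothetical usage on a 30 s depth clip recorded at 30 fps:
# depth_clip = np.load("depth_clip.npy")   # shape (900, H, W), assumed file
# chest_roi = (120, 220, 160, 300)         # illustrative pixel bounds
# rr = estimate_rr_bpm(respiration_signal_from_depth(depth_clip, chest_roi), fps=30)
```

A green-channel signal averaged over the face ROI, or a chest signal extracted from motion-magnified RGB video, could be passed to the same estimate_rr_bpm step in place of the depth-derived signal.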