Online Process Phase Detection Using Multimodal Deep Learning
Xinyu Li, Yanyi Zhang, Mengzhu Li, Shuhong Chen, Farneth R Austin, Ivan Marsic, Randall S Burd
Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON), IEEE Annual, 2016. DOI: 10.1109/UEMCON.2016.7777912
Abstract
We present a multimodal deep-learning architecture that automatically predicts phases of the trauma resuscitation process in real time. The system first preprocesses the audio and video streams captured by a Kinect's built-in microphone array and depth sensor. A multimodal deep-learning network then extracts video and audio features, which are combined through a "slow fusion" model. The final decision is made from the fused features by a modified softmax classification layer. The model was trained on 20 trauma resuscitation cases (more than 13 hours of data) and tested on five other cases. Our system achieved over 80% online detection accuracy with a 0.7 F-score, outperforming previous systems.
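To make the described pipeline concrete, below is a minimal sketch in PyTorch of a slow-fusion multimodal classifier: separate convolutional branches extract video (depth) and audio features, the two representations are merged in a later joint layer rather than at the input, and a softmax head produces phase probabilities. The layer sizes, input shapes, and number of phases are illustrative assumptions, not the architecture or hyperparameters reported in the paper.

```python
# Hypothetical slow-fusion sketch; sizes and phase count are assumptions.
import torch
import torch.nn as nn

class SlowFusionPhaseClassifier(nn.Module):
    def __init__(self, num_phases: int = 7):
        super().__init__()
        # Video branch: small CNN over a Kinect depth frame.
        self.video_cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Audio branch: 1-D CNN over microphone-array features
        # (assumed here to be a 40-band log-mel spectrogram).
        self.audio_cnn = nn.Sequential(
            nn.Conv1d(40, 32, kernel_size=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        # "Slow fusion": each modality is projected separately first,
        # then the two representations are merged in a joint layer.
        self.video_proj = nn.Linear(32, 64)
        self.audio_proj = nn.Linear(32, 64)
        self.fusion = nn.Sequential(nn.Linear(128, 64), nn.ReLU())
        # Classification head; softmax is applied at inference time.
        self.classifier = nn.Linear(64, num_phases)

    def forward(self, depth_frame: torch.Tensor, audio_feats: torch.Tensor) -> torch.Tensor:
        v = torch.relu(self.video_proj(self.video_cnn(depth_frame)))
        a = torch.relu(self.audio_proj(self.audio_cnn(audio_feats)))
        fused = self.fusion(torch.cat([v, a], dim=1))
        return self.classifier(fused)  # unnormalized phase scores

# Usage: one 120x160 depth frame and 40-band audio features over 100 time steps.
model = SlowFusionPhaseClassifier()
logits = model(torch.randn(1, 1, 120, 160), torch.randn(1, 40, 100))
phase_probs = torch.softmax(logits, dim=1)
```

For online detection, a sliding window of recent frames and audio would be fed to such a model at a fixed rate, with the argmax over `phase_probs` taken as the current phase estimate.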