Chanwoong Lee, Hyorim Choi, Shapna Muralidharan, H. Ko, Byounghyun Yoo, G. Kim
{"title":"Machine Assisted Video Tagging of Elderly Activities in K-Log Centre","authors":"Chanwoong Lee, Hyorim Choi, Shapna Muralidharan, H. Ko, Byounghyun Yoo, G. Kim","doi":"10.1109/MFI49285.2020.9235269","DOIUrl":null,"url":null,"abstract":"In a rapidly aging society, like in South Korea, the number of Alzheimer’s Disease (AD) patients is a significant public health problem, and the need for specialized healthcare centers is in high demand. Healthcare providers generally rely on caregivers (CG) for elderly persons with AD to monitor and help them in their daily activities. K-Log Centre is a healthcare provider located in Korea to help AD patients meet their daily needs with assistance from CG in the center. The CG’S in the K-Log Centre need to attend the patients’ unique demands and everyday essentials for long-term care. Moreover, the CG also describes and logs the day-to-day activities in Activities of Daily Living (ADL) log, which comprises various events in detail. The CG’s logging activities can overburden their work, leading to appalling results like suffering quality of elderly care and hiring additional CG’s to maintain the quality of care and a negative feedback cycle. In this paper, we have analyzed this impending issue in K-Log Centre and propose a method to facilitate machine-assisted human tagging of videos for logging of the elderly activities using Human Activity Recognition (HAR). To enable the scenario, we use a You Only Look Once (YOLO-v3)-based deep learning method for object detection and use it for HAR creating a multi-modal machine-assisted human tagging of videos. The proposed algorithm detects the HAR with a precision of 98.4%. After designing the HAR model, we have tested it in a live video feed from the K-Log Centre to test the proposed method. 
The model showed an accuracy of 81.4% in live data, reducing the logging activities of the CG’s.","PeriodicalId":446154,"journal":{"name":"2020 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MFI49285.2020.9235269","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
In a rapidly aging society such as South Korea, the growing number of Alzheimer’s Disease (AD) patients is a significant public health problem, and specialized healthcare centers are in high demand. Healthcare providers generally rely on caregivers (CGs) to monitor elderly persons with AD and to help them with their daily activities. The K-Log Centre is a healthcare provider in Korea that helps AD patients meet their daily needs with assistance from CGs at the center. The CGs in the K-Log Centre must attend to each patient’s unique demands and everyday essentials for long-term care. In addition, the CGs describe and record day-to-day activities in an Activities of Daily Living (ADL) log, which captures various events in detail. This logging can overburden the CGs, degrading the quality of elderly care, forcing the center to hire additional CGs to maintain that quality, and creating a negative feedback cycle. In this paper, we analyze this impending issue at the K-Log Centre and propose a method for machine-assisted human tagging of videos to log elderly activities using Human Activity Recognition (HAR). To enable this scenario, we use a You Only Look Once (YOLO-v3)-based deep learning method for object detection and apply it to HAR, creating multi-modal machine-assisted human tagging of videos. The proposed algorithm performs HAR with a precision of 98.4%. After designing the HAR model, we evaluated it on a live video feed from the K-Log Centre. The model achieved an accuracy of 81.4% on live data, reducing the CGs’ logging workload.
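The abstract describes suggesting activity tags from the objects a YOLO-v3 detector finds in a frame, which a caregiver then confirms rather than writes from scratch. A minimal sketch of that idea is below; the object labels, the rule table, and the function names are hypothetical illustrations, not the authors' actual pipeline.

```python
# Illustrative sketch (not the paper's implementation): map object labels
# returned by a YOLO-style detector to a coarse ADL tag suggestion that a
# caregiver can confirm or correct. All rules/labels here are hypothetical.
from typing import List

# Hypothetical rules: a set of co-occurring object labels -> suggested tag.
ACTIVITY_RULES = {
    frozenset({"person", "spoon", "bowl"}): "eating",
    frozenset({"person", "cup"}): "drinking",
    frozenset({"person", "book"}): "reading",
}

def suggest_activity(detected_labels: List[str]) -> str:
    """Return the first activity whose required objects were all detected."""
    detected = set(detected_labels)
    for required, activity in ACTIVITY_RULES.items():
        if required <= detected:  # all required objects present in the frame
            return activity
    return "unknown"  # no rule matched; the caregiver tags manually

print(suggest_activity(["person", "spoon", "bowl", "chair"]))
```

In this framing the detector automates the tedious part (proposing a tag per video segment), while the human caregiver remains the final authority, which matches the "machine-assisted human tagging" goal stated in the abstract.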