Machine Assisted Video Tagging of Elderly Activities in K-Log Centre

Chanwoong Lee, Hyorim Choi, Shapna Muralidharan, H. Ko, Byounghyun Yoo, G. Kim
{"title":"Machine Assisted Video Tagging of Elderly Activities in K-Log Centre","authors":"Chanwoong Lee, Hyorim Choi, Shapna Muralidharan, H. Ko, Byounghyun Yoo, G. Kim","doi":"10.1109/MFI49285.2020.9235269","DOIUrl":null,"url":null,"abstract":"In a rapidly aging society, like in South Korea, the number of Alzheimer’s Disease (AD) patients is a significant public health problem, and the need for specialized healthcare centers is in high demand. Healthcare providers generally rely on caregivers (CG) for elderly persons with AD to monitor and help them in their daily activities. K-Log Centre is a healthcare provider located in Korea to help AD patients meet their daily needs with assistance from CG in the center. The CG’S in the K-Log Centre need to attend the patients’ unique demands and everyday essentials for long-term care. Moreover, the CG also describes and logs the day-to-day activities in Activities of Daily Living (ADL) log, which comprises various events in detail. The CG’s logging activities can overburden their work, leading to appalling results like suffering quality of elderly care and hiring additional CG’s to maintain the quality of care and a negative feedback cycle. In this paper, we have analyzed this impending issue in K-Log Centre and propose a method to facilitate machine-assisted human tagging of videos for logging of the elderly activities using Human Activity Recognition (HAR). To enable the scenario, we use a You Only Look Once (YOLO-v3)-based deep learning method for object detection and use it for HAR creating a multi-modal machine-assisted human tagging of videos. The proposed algorithm detects the HAR with a precision of 98.4%. After designing the HAR model, we have tested it in a live video feed from the K-Log Centre to test the proposed method. 
The model showed an accuracy of 81.4% in live data, reducing the logging activities of the CG’s.","PeriodicalId":446154,"journal":{"name":"2020 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MFI49285.2020.9235269","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

In a rapidly aging society such as South Korea, the growing number of Alzheimer’s Disease (AD) patients is a significant public health problem, and specialized healthcare centers are in high demand. Healthcare providers generally rely on caregivers (CGs) to monitor elderly persons with AD and help them with their daily activities. The K-Log Centre is a healthcare provider in Korea that helps AD patients meet their daily needs with assistance from CGs at the center. The CGs in the K-Log Centre must attend to each patient’s unique demands and everyday essentials for long-term care. In addition, the CGs describe and log day-to-day activities in an Activities of Daily Living (ADL) log, which records various events in detail. This logging can overburden the CGs’ work, degrading the quality of elderly care, forcing the center to hire additional CGs to maintain that quality, and creating a negative feedback cycle. In this paper, we analyze this pressing issue at the K-Log Centre and propose a method for machine-assisted human tagging of videos to log elderly activities using Human Activity Recognition (HAR). To enable this scenario, we use a You Only Look Once (YOLO-v3)-based deep learning method for object detection and build on it for HAR, creating multi-modal machine-assisted human tagging of videos. The proposed algorithm performs HAR with a precision of 98.4%. After designing the HAR model, we evaluated it on a live video feed from the K-Log Centre. The model showed an accuracy of 81.4% on live data, reducing the CGs’ logging workload.
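The abstract describes a pipeline in which YOLO-v3 object detections are turned into candidate activity tags that a caregiver confirms rather than writes from scratch. The paper does not publish its mapping logic, so the sketch below is a hypothetical illustration of that idea: per-frame object labels (as a YOLO-style detector would emit) are matched against assumed label-to-activity rules, and a majority vote over frames produces one machine-proposed ADL tag for human review. The rule table, label names, and function names are all illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch: turning per-frame YOLO-style object labels into a
# machine-proposed ADL (Activities of Daily Living) tag for caregiver review.
# The rules below are illustrative assumptions, not the paper's mapping.
from collections import Counter

# Assumed mapping from co-occurring object labels to a candidate activity.
ACTIVITY_RULES = {
    frozenset({"person", "spoon", "bowl"}): "eating",
    frozenset({"person", "cup"}): "drinking",
    frozenset({"person", "bed"}): "resting",
    frozenset({"person", "book"}): "reading",
}

def suggest_activity(detections):
    """Return the most specific matching activity for one frame's
    detected object labels, or None if no rule matches."""
    labels = set(detections)
    best, best_size = None, 0
    for required, activity in ACTIVITY_RULES.items():
        # Prefer the rule that explains the most detected objects.
        if required <= labels and len(required) > best_size:
            best, best_size = activity, len(required)
    return best

def tag_video(frames):
    """Aggregate per-frame suggestions by majority vote into a single
    proposed tag that a caregiver can confirm or correct."""
    votes = Counter(
        a for f in frames if (a := suggest_activity(f)) is not None
    )
    return votes.most_common(1)[0][0] if votes else None

frames = [
    ["person", "spoon", "bowl"],
    ["person", "bowl"],            # no rule matches; frame abstains
    ["person", "spoon", "bowl", "cup"],
]
print(tag_video(frames))  # -> eating
```

Keeping the machine step as a *suggestion* rather than an automatic log entry matches the paper's framing of machine-assisted human tagging: the detector narrows the choices, while the caregiver retains final authority over the ADL record.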