Pose2Gait: Extracting Gait Features from Monocular Video of Individuals with Dementia

Caroline Malin-Mayor, Vida Adeli, Andrea Sabo, S. Noritsyn, C. Gorodetsky, A. Fasano, A. Iaboni, B. Taati
DOI: 10.48550/arXiv.2308.11484 (https://doi.org/10.48550/arXiv.2308.11484)
Venue: PRIME@MICCAI
Published: 2023-08-22

Abstract

Video-based ambient monitoring of gait for older adults with dementia has the potential to detect negative changes in health and allow clinicians and caregivers to intervene early to prevent falls or hospitalizations. Computer vision-based pose tracking models can process video data automatically and extract joint locations; however, publicly available models are not optimized for gait analysis on older adults or clinical populations. In this work, we train a deep neural network to map from a two-dimensional pose sequence, extracted from a video of an individual walking down a hallway toward a wall-mounted camera, to a set of three-dimensional spatiotemporal gait features averaged over the walking sequence. The data of individuals with dementia used in this work were captured at two sites using a wall-mounted system to collect the video and depth information used to train and evaluate our model. Our Pose2Gait model is able to extract velocity and step length values from the video that are correlated with the features from the depth camera, with Spearman's correlation coefficients of 0.83 and 0.60 respectively, showing that three-dimensional spatiotemporal features can be predicted from monocular video. Future work will aim to improve the accuracy of other features, such as step time and step width, and to test the utility of the predicted values for detecting meaningful changes in gait during longitudinal ambient monitoring.
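As a concrete illustration of the evaluation metric used above, the sketch below computes Spearman's rank correlation between gait feature values predicted from monocular video and reference values from the depth camera. The numeric values are hypothetical placeholders, not data from the paper; this shows only how the reported correlation coefficients would be computed.

```python
def rank(values):
    # Assign 1-based ranks, averaging ranks over tied values.
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for the tied block
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks


def spearman(x, y):
    # Spearman's rho is the Pearson correlation of the rank vectors.
    rx, ry = rank(x), rank(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)


# Hypothetical example: predicted step lengths (m) vs. depth-camera references.
pred = [0.42, 0.55, 0.47, 0.60, 0.51]
ref = [0.40, 0.58, 0.52, 0.62, 0.50]
print(round(spearman(pred, ref), 2))  # → 0.9
```

A rank-based correlation is a natural choice here because it measures monotonic agreement between the video-derived and depth-derived features without assuming the relationship is linear or that the errors are normally distributed.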