Enhancing fall risk assessment: instrumenting vision with deep learning during walks.

Journal of NeuroEngineering and Rehabilitation · IF 5.2 · JCR Q1 (Engineering, Biomedical) · CAS Tier 2 (Medicine) · Pub Date: 2024-06-22 · DOI: 10.1186/s12984-024-01400-2 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11193231/pdf/
Jason Moore, Robert Catena, Lisa Fournier, Pegah Jamali, Peter McMeekin, Samuel Stuart, Richard Walker, Thomas Salisbury, Alan Godfrey
{"title":"Enhancing fall risk assessment: instrumenting vision with deep learning during walks.","authors":"Jason Moore, Robert Catena, Lisa Fournier, Pegah Jamali, Peter McMeekin, Samuel Stuart, Richard Walker, Thomas Salisbury, Alan Godfrey","doi":"10.1186/s12984-024-01400-2","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Falls are common in a range of clinical cohorts, where routine risk assessment often comprises subjective visual observation only. Typically, observational assessment involves evaluation of an individual's gait during scripted walking protocols within a lab to identify deficits that potentially increase fall risk, but subtle deficits may not be (readily) observable. Therefore, objective approaches (e.g., inertial measurement units, IMUs) are useful for quantifying high resolution gait characteristics, enabling more informed fall risk assessment by capturing subtle deficits. However, IMU-based gait instrumentation alone is limited, failing to consider participant behaviour and details within the environment (e.g., obstacles). Video-based eye-tracking glasses may provide additional insight to fall risk, clarifying how people traverse environments based on head and eye movements. Recording head and eye movements can provide insights into how the allocation of visual attention to environmental stimuli influences successful navigation around obstacles. Yet, manual review of video data to evaluate head and eye movements is time-consuming and subjective. An automated approach is needed but none currently exists. This paper proposes a deep learning-based object detection algorithm (VARFA) to instrument vision and video data during walks, complementing instrumented gait.</p><p><strong>Method: </strong>The approach automatically labels video data captured in a gait lab to assess visual attention and details of the environment. The proposed algorithm uses a YoloV8 model trained on with a novel lab-based dataset.</p><p><strong>Results: </strong>VARFA achieved excellent evaluation metrics (0.93 mAP50), identifying, and localizing static objects (e.g., obstacles in the walking path) with an average accuracy of 93%. Similarly, a U-NET based track/path segmentation model achieved good metrics (IoU 0.82), suggesting that the predicted tracks (i.e., walking paths) align closely with the actual track, with an overlap of 82%. Notably, both models achieved these metrics while processing at real-time speeds, demonstrating efficiency and effectiveness for pragmatic applications.</p><p><strong>Conclusion: </strong>The instrumented approach improves the efficiency and accuracy of fall risk assessment by evaluating the visual allocation of attention (i.e., information about when and where a person is attending) during navigation, improving the breadth of instrumentation in this area. Use of VARFA to instrument vision could be used to better inform fall risk assessment by providing behaviour and context data to complement instrumented e.g., IMU data during gait tasks. 
That may have notable (e.g., personalized) rehabilitation implications across a wide range of clinical cohorts where poor gait and increased fall risk are common.</p>","PeriodicalId":16384,"journal":{"name":"Journal of NeuroEngineering and Rehabilitation","volume":null,"pages":null},"PeriodicalIF":5.2000,"publicationDate":"2024-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11193231/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of NeuroEngineering and Rehabilitation","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1186/s12984-024-01400-2","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
引用次数: 0

Abstract

Background: Falls are common in a range of clinical cohorts, where routine risk assessment often comprises subjective visual observation only. Typically, observational assessment involves evaluation of an individual's gait during scripted walking protocols within a lab to identify deficits that potentially increase fall risk, but subtle deficits may not be (readily) observable. Therefore, objective approaches (e.g., inertial measurement units, IMUs) are useful for quantifying high resolution gait characteristics, enabling more informed fall risk assessment by capturing subtle deficits. However, IMU-based gait instrumentation alone is limited, failing to consider participant behaviour and details within the environment (e.g., obstacles). Video-based eye-tracking glasses may provide additional insight into fall risk, clarifying how people traverse environments based on head and eye movements. Recording head and eye movements can provide insights into how the allocation of visual attention to environmental stimuli influences successful navigation around obstacles. Yet, manual review of video data to evaluate head and eye movements is time-consuming and subjective. An automated approach is needed but none currently exists. This paper proposes a deep learning-based object detection algorithm (VARFA) to instrument vision and video data during walks, complementing instrumented gait.

Method: The approach automatically labels video data captured in a gait lab to assess visual attention and details of the environment. The proposed algorithm uses a YoloV8 model trained on a novel lab-based dataset.
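
The paper does not publish implementation code, so the snippet below is only a minimal sketch of how a YOLOv8 detector can be fine-tuned and applied to gait-lab video using the ultralytics Python API (the library that ships YOLOv8). The dataset configuration file lab_objects.yaml, the yolov8n.pt starting checkpoint, and the video filename are illustrative placeholders rather than details from the study.

```python
# Sketch: fine-tune YOLOv8 on a lab-object dataset and run it on eye-tracker video.
# "lab_objects.yaml" and "walk_recording.mp4" are assumed placeholder names.
from ultralytics import YOLO

# Start from a pretrained YOLOv8 checkpoint and fine-tune on the lab dataset.
model = YOLO("yolov8n.pt")
model.train(data="lab_objects.yaml", epochs=100, imgsz=640)

# Detect objects frame-by-frame; stream=True yields results lazily so long
# recordings do not need to fit in memory at once.
for result in model.predict(source="walk_recording.mp4", stream=True):
    for box in result.boxes:
        cls_name = result.names[int(box.cls)]
        conf = float(box.conf)
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        print(f"{cls_name}: {conf:.2f} at ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")
```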

Results: VARFA achieved excellent evaluation metrics (0.93 mAP50), identifying and localizing static objects (e.g., obstacles in the walking path) with an average accuracy of 93%. Similarly, a U-Net-based track/path segmentation model achieved good metrics (IoU 0.82), suggesting that the predicted tracks (i.e., walking paths) align closely with the actual track, with an overlap of 82%. Notably, both models achieved these metrics while processing at real-time speeds, demonstrating efficiency and effectiveness for pragmatic applications.
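
For readers less familiar with the metrics, the sketch below shows the intersection-over-union (IoU) calculation behind the reported 0.82 segmentation score; mAP50 is the analogous detection metric, i.e., mean average precision with a detection counted as correct when its IoU with the ground truth is at least 0.5. The masks here are toy examples, not data from the study.

```python
# Sketch: IoU between a predicted and a ground-truth walking-path mask.
import numpy as np

def iou(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """IoU = |pred AND true| / |pred OR true|; 0.82 means ~82% overlap."""
    intersection = np.logical_and(pred_mask, true_mask).sum()
    union = np.logical_or(pred_mask, true_mask).sum()
    return float(intersection / union) if union > 0 else 1.0

# Toy example: two overlapping rectangular path masks.
pred = np.zeros((100, 100), dtype=bool)
true = np.zeros((100, 100), dtype=bool)
pred[20:80, 30:70] = True
true[25:85, 30:70] = True
print(f"IoU = {iou(pred, true):.2f}")  # ~0.85 for these toy masks
```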

Conclusion: The instrumented approach improves the efficiency and accuracy of fall risk assessment by evaluating the visual allocation of attention (i.e., information about when and where a person is attending) during navigation, broadening instrumentation in this area. Using VARFA to instrument vision could better inform fall risk assessment by providing behaviour and context data to complement instrumented (e.g., IMU) gait data. That may have notable (e.g., personalized) rehabilitation implications across a wide range of clinical cohorts where poor gait and increased fall risk are common.

Source journal: Journal of NeuroEngineering and Rehabilitation (Engineering & Technology - Engineering: Biomedical)
CiteScore: 9.60
Self-citation rate: 3.90%
Articles published: 122
Review time: 24 months
Journal description: Journal of NeuroEngineering and Rehabilitation considers manuscripts on all aspects of research that result from cross-fertilization of the fields of neuroscience, biomedical engineering, and physical medicine & rehabilitation.
Latest articles in this journal:
Comparison of synergy extrapolation and static optimization for estimating multiple unmeasured muscle activations during walking.
Immersive virtual reality for learning exoskeleton-like virtual walking: a feasibility study.
Instrumented assessment of lower and upper motor neuron signs in amyotrophic lateral sclerosis using robotic manipulation: an explorative study.
Rest the brain to learn new gait patterns after stroke.
Effects of virtual reality rehabilitation after spinal cord injury: a systematic review and meta-analysis.