Toward enhanced free-living fall risk assessment: Data mining and deep learning for environment and terrain classification

Jason Moore , Sam Stuart , Peter McMeekin , Richard Walker , Mina Nouredanesh , James Tung , Richard Reilly , Alan Godfrey
Intelligence-based medicine, Volume 8 (2023), Article 100103. DOI: 10.1016/j.ibmed.2023.100103
Cited: 0

Abstract


Fall risk assessment can be informed by understanding mobility/gait. Contemporary mobility analysis is being advanced by wearable inertial measurement units (IMUs). Typically, IMUs gather temporal mobility-based outcomes (e.g., step time) in labs/clinics or beyond, capturing data for habitually informed fall risk. However, a thorough understanding of free-living IMU-based mobility is currently limited by a lack of context. For example, although IMU-based step length variability can be measured, there is no absolute clarity about the factors behind those variations, which could be intrinsic or extrinsic (environmental). For a thorough understanding of habitual fall risk assessment through IMU-based mobility outcomes, the use of wearable video cameras is suggested. However, investigating video data is laborious, i.e., it requires watching and manually labelling environments. Additionally, it raises ethical issues such as privacy. Accordingly, automated artificial intelligence (AI) approaches that draw upon heterogeneous datasets to accurately classify environments are needed. Here, a novel dataset was created by mining online video, and a deep learning-based tool was built from chained convolutional neural networks, enabling automated environment (indoor or outdoor) and terrain (e.g., carpet, grass) classification. The dataset contained 146,624 video-based images (environment: 79,251; floor visible: 28,347; terrain: 39,026). Upon training each classifier, the system achieved F1-scores of ≥0.84 when tested on a manually labelled, unseen validation dataset (environment: 0.98; floor visible indoor: 0.86; floor visible outdoor: 0.96; terrain indoor: 0.84; terrain outdoor: 0.95). Testing on new data resulted in accuracies from 51% to 100% for the isolated networks and 45–90% for the complete model.
This work is ongoing, with the underlying AI being refined for improved classification accuracy to aid automated contextual analysis of mobility/gait and subsequent fall risk. Ongoing work also involves primary data capture from within participants' free-living environments to bolster dataset heterogeneity.
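The chained classification flow described in the abstract (environment, then floor visibility, then an environment-specific terrain classifier) can be sketched as below. This is a minimal illustration of the routing logic only: the stub classifiers, frame representation, and labels are hypothetical assumptions, not the authors' implementation, where each stage would be a trained convolutional neural network.

```python
# Hypothetical sketch of chained classifiers: an environment classifier routes
# each frame to a floor-visibility check, which routes it to an indoor- or
# outdoor-specific terrain classifier. Stubs stand in for trained CNNs.

def classify_frame(frame, env_clf, floor_clf, terrain_indoor, terrain_outdoor):
    """Return (environment, terrain) labels for one video frame.

    terrain is None when the floor/ground is not visible in the frame.
    """
    environment = env_clf(frame)            # "indoor" or "outdoor"
    if not floor_clf[environment](frame):   # floor/ground not visible
        return environment, None
    if environment == "indoor":
        return environment, terrain_indoor(frame)
    return environment, terrain_outdoor(frame)

# Illustrative stubs; a real system would load trained CNN models here.
env_clf = lambda f: "indoor" if f.get("lighting") == "artificial" else "outdoor"
floor_clf = {"indoor": lambda f: f.get("floor_visible", False),
             "outdoor": lambda f: f.get("floor_visible", False)}
terrain_indoor = lambda f: "carpet"   # e.g., carpet, tile, wood
terrain_outdoor = lambda f: "grass"   # e.g., grass, pavement, gravel

frame = {"lighting": "artificial", "floor_visible": True}
print(classify_frame(frame, env_clf, floor_clf, terrain_indoor, terrain_outdoor))
# → ('indoor', 'carpet')
```

Chaining in this way means each downstream network only sees frames relevant to its branch, which is one plausible reason the per-branch F1-scores reported above differ between the indoor and outdoor terrain classifiers.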

Source journal: Intelligence-based medicine (Health Informatics). CiteScore: 5.00. Self-citation rate: 0.00%. Review time: 187 days.