Deep learning semantic segmentation for indoor terrain extraction: Toward better informing free-living wearable gait assessment

Jason Moore, S. Stuart, R. Walker, Peter McMeekin, F. Young, A. Godfrey
{"title":"用于室内地形提取的深度学习语义分割:为自由生活可穿戴步态评估提供更好的信息","authors":"Jason Moore, S. Stuart, R. Walker, Peter McMeekin, F. Young, A. Godfrey","doi":"10.1109/BSN56160.2022.9928505","DOIUrl":null,"url":null,"abstract":"Contemporary approaches to gait assessment use wearable within free-living environments to capture habitual information, which is more informative compared to data capture in the lab. Wearables range from inertial to camera-based technologies but pragmatic challenges such as analysis of big data from heterogenous environments exist. For example, wearable camera data often requires manual time-consuming subjective contextualization, such as labelling of terrain type. There is a need for the application of automated approaches such as those suggested by artificial intelligence (AI) based methods. This pilot study investigates multiple segmentation models and proposes use of the PSPNet deep learning network to automate a binary indoor floor segmentation mask for use with wearable camera-based data (i.e., video frames). To inform the development of the AI method, a unique approach of mining heterogenous data from a video sharing platform (YouTube) was adopted to provide independent training data. The dataset contains 1973 image frames and accompanying segmentation masks. When trained on the dataset the proposed model achieved an Instance over Union score of 0.73 over 25 epochs in complex environments. The proposed method will inform future work within the field of habitual free-living gait assessment to provide automated contextual information when used in conjunction with wearable inertial derived gait characteristics.Clinical Relevance—Processes developed here will aid automated video-based free-living gait assessment","PeriodicalId":150990,"journal":{"name":"2022 IEEE-EMBS International Conference on Wearable and Implantable Body Sensor Networks (BSN)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Deep learning semantic segmentation for indoor terrain extraction: Toward better informing free-living wearable gait assessment\",\"authors\":\"Jason Moore, S. Stuart, R. Walker, Peter McMeekin, F. Young, A. Godfrey\",\"doi\":\"10.1109/BSN56160.2022.9928505\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Contemporary approaches to gait assessment use wearable within free-living environments to capture habitual information, which is more informative compared to data capture in the lab. Wearables range from inertial to camera-based technologies but pragmatic challenges such as analysis of big data from heterogenous environments exist. For example, wearable camera data often requires manual time-consuming subjective contextualization, such as labelling of terrain type. There is a need for the application of automated approaches such as those suggested by artificial intelligence (AI) based methods. This pilot study investigates multiple segmentation models and proposes use of the PSPNet deep learning network to automate a binary indoor floor segmentation mask for use with wearable camera-based data (i.e., video frames). To inform the development of the AI method, a unique approach of mining heterogenous data from a video sharing platform (YouTube) was adopted to provide independent training data. The dataset contains 1973 image frames and accompanying segmentation masks. 
When trained on the dataset the proposed model achieved an Instance over Union score of 0.73 over 25 epochs in complex environments. The proposed method will inform future work within the field of habitual free-living gait assessment to provide automated contextual information when used in conjunction with wearable inertial derived gait characteristics.Clinical Relevance—Processes developed here will aid automated video-based free-living gait assessment\",\"PeriodicalId\":150990,\"journal\":{\"name\":\"2022 IEEE-EMBS International Conference on Wearable and Implantable Body Sensor Networks (BSN)\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-09-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE-EMBS International Conference on Wearable and Implantable Body Sensor Networks (BSN)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/BSN56160.2022.9928505\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE-EMBS International Conference on Wearable and Implantable Body Sensor Networks (BSN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/BSN56160.2022.9928505","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

Contemporary approaches to gait assessment use wearables within free-living environments to capture habitual information, which is more informative than data captured in the lab. Wearables range from inertial to camera-based technologies, but pragmatic challenges remain, such as analysing big data drawn from heterogeneous environments. For example, wearable camera data often requires manual, time-consuming, and subjective contextualization, such as labelling the terrain type. There is a need for automated approaches, such as those offered by artificial intelligence (AI) based methods. This pilot study investigates multiple segmentation models and proposes the PSPNet deep learning network to automatically generate a binary indoor floor segmentation mask from wearable camera data (i.e., video frames). To inform the development of the AI method, a unique approach of mining heterogeneous data from a video sharing platform (YouTube) was adopted to provide independent training data; the resulting dataset contains 1973 image frames with accompanying segmentation masks. When trained on this dataset, the proposed model achieved an Intersection over Union score of 0.73 over 25 epochs in complex environments. The proposed method will inform future work in habitual free-living gait assessment, providing automated contextual information when used in conjunction with wearable inertial-derived gait characteristics.

Clinical Relevance: Processes developed here will aid automated video-based free-living gait assessment.
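The paper itself does not include code. As a minimal sketch of the evaluation metric quoted above, the snippet below shows how a binary Intersection over Union (IoU) score could be computed for a predicted indoor-floor mask against a ground-truth mask; the array names, the 0.5 threshold, and the toy example are assumptions for illustration, not values taken from the paper.

```python
import numpy as np


def binary_iou(pred: np.ndarray, gt: np.ndarray, threshold: float = 0.5) -> float:
    """Intersection over Union for a single binary (floor / not-floor) mask.

    pred      : predicted floor probabilities or hard labels, shape (H, W)
    gt        : ground-truth labels in {0, 1}, shape (H, W)
    threshold : cut-off applied to the prediction (assumed value, not from the paper)
    """
    pred_bin = pred >= threshold
    gt_bin = gt.astype(bool)

    intersection = np.logical_and(pred_bin, gt_bin).sum()
    union = np.logical_or(pred_bin, gt_bin).sum()

    # If neither mask contains any floor pixels, treat the frame as a perfect match.
    return 1.0 if union == 0 else float(intersection) / float(union)


if __name__ == "__main__":
    # Toy 4x4 frame: the prediction recovers 3 of the 5 ground-truth floor pixels
    # and adds no false positives, so IoU = 3 / 5 = 0.60.
    gt = np.array([[1, 1, 0, 0],
                   [1, 1, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 1]])
    pred = np.array([[1, 1, 0, 0],
                     [1, 0, 0, 0],
                     [0, 0, 0, 0],
                     [0, 0, 0, 0]])
    print(f"IoU = {binary_iou(pred, gt):.2f}")
```

For the segmentation model itself, one commonly used open-source PyTorch implementation of PSPNet is the segmentation_models_pytorch package (e.g., smp.PSPNet(encoder_name="resnet34", classes=1) for a single floor class); the paper does not state which implementation the authors used, so this is only an assumption about how such a binary mask could be produced.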