{"title":"基于可穿戴惯性传感器的行人行走方向估计的计算机视觉方法:PatternNet","authors":"Hanyuan Fu, Thomas Bonis, V. Renaudin, Ni Zhu","doi":"10.1109/PLANS53410.2023.10140028","DOIUrl":null,"url":null,"abstract":"In this paper, we propose an image-based neural network approach (PatternNet) for walking direction estimation with wearable inertial sensors. Gait event segmentation and projection are used to convert the inertial signals to image-like tabular samples, from which a Convolutional neural network (CNN) extracts geometrical features for walking direction inference. To embrace the diversity of individual walking characteristics and different ways to carry the device, tailor-made models are constructed based on individual users' gait characteristics and the device-carrying mode. Experimental assessments of the proposed method and a competing method (RoNIN) are carried out in real-life situations and over 3 km total walking distance, covering indoor and outdoor environments, involving both sighted and visually impaired volunteers carrying the device in three different ways: texting, swinging and in a jacket pocket. PatternNet estimates the walking directions with a mean accuracy between 7 to 10 degrees for the three test persons and is 1.5 times better than RONIN estimates.","PeriodicalId":344794,"journal":{"name":"2023 IEEE/ION Position, Location and Navigation Symposium (PLANS)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2023-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Computer Vision Approach for Pedestrian Walking Direction Estimation with Wearable Inertial Sensors: PatternNet\",\"authors\":\"Hanyuan Fu, Thomas Bonis, V. Renaudin, Ni Zhu\",\"doi\":\"10.1109/PLANS53410.2023.10140028\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, we propose an image-based neural network approach (PatternNet) for walking direction estimation with wearable inertial sensors. Gait event segmentation and projection are used to convert the inertial signals to image-like tabular samples, from which a Convolutional neural network (CNN) extracts geometrical features for walking direction inference. To embrace the diversity of individual walking characteristics and different ways to carry the device, tailor-made models are constructed based on individual users' gait characteristics and the device-carrying mode. Experimental assessments of the proposed method and a competing method (RoNIN) are carried out in real-life situations and over 3 km total walking distance, covering indoor and outdoor environments, involving both sighted and visually impaired volunteers carrying the device in three different ways: texting, swinging and in a jacket pocket. 
PatternNet estimates the walking directions with a mean accuracy between 7 to 10 degrees for the three test persons and is 1.5 times better than RONIN estimates.\",\"PeriodicalId\":344794,\"journal\":{\"name\":\"2023 IEEE/ION Position, Location and Navigation Symposium (PLANS)\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-04-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 IEEE/ION Position, Location and Navigation Symposium (PLANS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/PLANS53410.2023.10140028\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE/ION Position, Location and Navigation Symposium (PLANS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/PLANS53410.2023.10140028","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A Computer Vision Approach for Pedestrian Walking Direction Estimation with Wearable Inertial Sensors: PatternNet
Abstract: In this paper, we propose an image-based neural network approach (PatternNet) for walking direction estimation with wearable inertial sensors. Gait event segmentation and projection are used to convert the inertial signals into image-like tabular samples, from which a convolutional neural network (CNN) extracts geometric features for walking direction inference. To accommodate the diversity of individual walking characteristics and the different ways of carrying the device, tailor-made models are constructed for each user's gait characteristics and device-carrying mode. Experimental assessments of the proposed method and a competing method (RoNIN) are carried out in real-life situations over a total walking distance of more than 3 km, covering indoor and outdoor environments and involving both sighted and visually impaired volunteers carrying the device in three different ways: texting, swinging, and in a jacket pocket. PatternNet estimates the walking direction with a mean accuracy between 7 and 10 degrees for the three test persons, roughly 1.5 times better than RoNIN's estimates.
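The abstract outlines a three-stage pipeline: segment the inertial stream at gait events, project each stride into a fixed-size image-like sample, and let a CNN infer the walking direction. The following is a minimal sketch of that idea in PyTorch; the zero-crossing segmentation rule, the 6x64 sample size, the network layout, and the (cos, sin) heading encoding are all illustrative assumptions, not the paper's published design.

```python
# Hypothetical sketch of the PatternNet pipeline described in the abstract.
# Segmentation rule, sample shape, and CNN layout are assumptions.
import numpy as np
import torch
import torch.nn as nn

def segment_strides(gyro_z: np.ndarray) -> list[slice]:
    """Assumed gait-event detector: split at positive zero-crossings of a
    gyroscope channel (a stand-in for the paper's segmentation step)."""
    zc = np.where((gyro_z[:-1] < 0) & (gyro_z[1:] >= 0))[0]
    return [slice(a, b) for a, b in zip(zc[:-1], zc[1:]) if b - a > 10]

def to_image_sample(imu: np.ndarray, seg: slice, width: int = 64) -> np.ndarray:
    """Project one stride (6 x T IMU window) onto a fixed 6 x width grid by
    linear resampling, yielding an 'image-like tabular sample'."""
    t_src = np.linspace(0.0, 1.0, imu[:, seg].shape[1])
    t_dst = np.linspace(0.0, 1.0, width)
    return np.stack([np.interp(t_dst, t_src, ch) for ch in imu[:, seg]])

class PatternNetSketch(nn.Module):
    """Small CNN regressing (cos, sin) of the walking direction from one
    6 x 64 stride image; the angle is recovered with atan2 afterwards."""
    def __init__(self) -> None:
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((1, 2)),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, 8)),
        )
        self.head = nn.Linear(32 * 8, 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        v = self.head(self.features(x).flatten(1))
        return v / v.norm(dim=1, keepdim=True)  # unit vector -> valid angle

# Usage on synthetic data: a 6-channel IMU stream with a fake periodic gait
# signal; the abstract's "tailor-made models" would mean training one such
# model per user and per carrying mode.
rng = np.random.default_rng(0)
imu = rng.standard_normal((6, 1000)).astype(np.float32)
imu[5] = np.sin(np.linspace(0.0, 20.0 * np.pi, 1000))  # fake gait periodicity
samples = [to_image_sample(imu, s) for s in segment_strides(imu[5])]
batch = torch.from_numpy(np.stack(samples).astype(np.float32)).unsqueeze(1)
cos_sin = PatternNetSketch()(batch)                     # (N, 2)
headings_deg = torch.rad2deg(torch.atan2(cos_sin[:, 1], cos_sin[:, 0]))
print(headings_deg.shape)                               # one heading per stride
```

Regressing a unit (cos, sin) vector instead of the raw angle is a common trick to avoid the 359°/0° wrap-around discontinuity in the loss; whether PatternNet itself uses this encoding is not stated in the abstract.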