Exploring Edge Computing for Gait Recognition
Israel Raul Tiñini Alvarez, Guillermo Sahonero-Alvarez, Carlos Menacho, Josmar Suarez
2021 4th International Conference on Bio-Engineering for Smart Technologies (BioSMART), published 2021-12-08
DOI: 10.1109/BioSMART54244.2021.9677840 (https://doi.org/10.1109/BioSMART54244.2021.9677840)
Abstract
Gait Recognition, as a way to identify people, is remarkably attractive for scenarios in which it is not possible to rely on subjects' collaboration. Nevertheless, of all the modalities that Gait Recognition involves, vision-based approaches are better suited to meet hardware and setting limitations. Because of that, in the past years there have been several efforts to develop algorithms that are robust to visual gait covariates, i.e., view, clothing, and carrying variations. However, besides robustness, real-world gait recognition systems also need to be implemented with near real-time computational demands and portability in mind. In this work we propose an Edge Computing approach based on the NVIDIA Jetson Nano development board and the OpenCV OAK-D camera to perform Gait Recognition. To adapt our approach, we created two small data sets that allowed us to particularize the system to local data. Our pipeline runs a pre-trained object detection algorithm on the OAK-D and executes both the representation extraction and the inference on the Jetson Nano. To test our framework, we first explore its feasibility and consistency in an offline manner. Later, we characterize the complexity and processing time when executing the procedures in an online setup. Our results show that the approach is promising, as it allows online operation with an inference time of 35.8 ms.
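
The abstract only outlines the split between on-camera detection and host-side gait inference. The snippet below is a minimal sketch of that split, assuming the OAK-D is driven through the Luxonis DepthAI Python API with a MobileNet-SSD person detector; the blob path, the person label id, and the gait_embedding() helper are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch: person detection runs on the OAK-D device (DepthAI),
# while the cropped person regions are handed to a gait stage on the host
# (the Jetson Nano in the paper). The blob path, label id, and
# gait_embedding() are placeholders.
import cv2
import depthai as dai
import numpy as np

PERSON_LABEL = 15  # "person" class in the VOC label map used by MobileNet-SSD (assumed detector)


def gait_embedding(crop: np.ndarray) -> np.ndarray:
    """Stand-in for the representation extraction + inference stage that the
    paper executes on the Jetson Nano. Here it just returns a dummy feature."""
    resized = cv2.resize(crop, (64, 64))
    return resized.mean(axis=2).ravel()


pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)
cam.setInterleaved(False)

nn = pipeline.create(dai.node.MobileNetDetectionNetwork)
nn.setBlobPath("mobilenet-ssd.blob")  # placeholder path to a compiled detector blob
nn.setConfidenceThreshold(0.5)
cam.preview.link(nn.input)

xout_rgb = pipeline.create(dai.node.XLinkOut)
xout_rgb.setStreamName("rgb")
cam.preview.link(xout_rgb.input)

xout_nn = pipeline.create(dai.node.XLinkOut)
xout_nn.setStreamName("nn")
nn.out.link(xout_nn.input)

with dai.Device(pipeline) as device:
    q_rgb = device.getOutputQueue("rgb", maxSize=4, blocking=False)
    q_nn = device.getOutputQueue("nn", maxSize=4, blocking=False)
    while True:
        frame = q_rgb.get().getCvFrame()       # BGR frame streamed from the camera
        detections = q_nn.get().detections     # detections computed on the OAK-D itself
        h, w = frame.shape[:2]
        for det in detections:
            if det.label != PERSON_LABEL:
                continue
            # Bounding boxes are normalized [0, 1]; convert to pixel coordinates.
            x1, y1 = int(det.xmin * w), int(det.ymin * h)
            x2, y2 = int(det.xmax * w), int(det.ymax * h)
            crop = frame[max(y1, 0):y2, max(x1, 0):x2]
            if crop.size:
                embedding = gait_embedding(crop)  # host-side (Jetson Nano) stage
```

The design choice this illustrates is the one the abstract describes: the camera's on-board accelerator handles object detection, so the Jetson Nano only receives person crops and spends its compute budget on representation extraction and identity inference.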