{"title":"Plantar Pressure-Based Gait Recognition with and Without Carried Object by Convolutional Neural Network-Autoencoder Architecture.","authors":"Chin-Cheng Wu, Cheng-Wei Tsai, Fei-En Wu, Chi-Hsuan Chiang, Jin-Chern Chiou","doi":"10.3390/biomimetics10020079","DOIUrl":null,"url":null,"abstract":"<p><p>Convolutional neural networks (CNNs) have been widely and successfully demonstrated for closed set recognition in gait identification, but they still lack robustness in open set recognition for unknown classes. To improve the disadvantage, we proposed a convolutional neural network autoencoder (CNN-AE) architecture for user classification based on plantar pressure gait recognition. The model extracted gait features using pressure-sensitive mats, focusing on foot pressure distribution and foot size during walking. Preprocessing techniques, including region of interest (ROI) selection, feature image extraction, and data horizontal flipping, were utilized to establish a CNN model that assessed gait recognition accuracy under two conditions: without carried items and carrying a 500 g object. To extend the application of the CNN to open set recognition for unauthorized personnel, the proposed convolutional neural network-autoencoder (CNN-AE) architecture compressed the average foot pressure map into a 64-dimensional feature vector and facilitated identity determination based on the distances between these vectors. Among 60 participants, 48 were classified as authorized individuals and 12 as unauthorized. Under the condition of not carrying an object, an accuracy of 91.218%, precision of 93.676%, recall of 90.369%, and an F1-Score of 91.993% were achieved, indicating that the model successfully identified most actual positives. 
However, when carrying a 500 g object, the accuracy was 85.648%, precision was 94.459%, recall was 84.423%, and the F1-Score was 89.603%.</p>","PeriodicalId":8907,"journal":{"name":"Biomimetics","volume":"10 2","pages":""},"PeriodicalIF":3.4000,"publicationDate":"2025-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11853110/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Biomimetics","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.3390/biomimetics10020079","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0
Abstract
Convolutional neural networks (CNNs) have been widely and successfully applied to closed-set gait identification, but they lack robustness in open-set recognition of unknown classes. To address this shortcoming, we propose a convolutional neural network-autoencoder (CNN-AE) architecture for user classification based on plantar pressure gait recognition. The model extracts gait features from pressure-sensitive mats, focusing on foot pressure distribution and foot size during walking. Preprocessing techniques, including region-of-interest (ROI) selection, feature image extraction, and horizontal flipping of the data, were used to build a CNN model whose recognition accuracy was assessed under two conditions: walking without a carried item and walking while carrying a 500 g object. To extend the CNN to open-set recognition of unauthorized personnel, the proposed CNN-AE compresses the average foot pressure map into a 64-dimensional feature vector and determines identity from the distances between these vectors. Among 60 participants, 48 were designated authorized individuals and 12 unauthorized. Without a carried object, the model achieved an accuracy of 91.218%, precision of 93.676%, recall of 90.369%, and an F1-score of 91.993%, indicating that it identified most actual positives. When carrying a 500 g object, accuracy was 85.648%, precision 94.459%, recall 84.423%, and the F1-score 89.603%.
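The open-set decision described in the abstract — compressing each average foot pressure map to a 64-dimensional embedding and accepting or rejecting a probe based on its distance to enrolled embeddings — can be sketched as follows. The encoder itself, the distance metric (Euclidean here), and the threshold value are all assumptions for illustration; the abstract does not specify them, so random vectors stand in for the CNN-AE embeddings of the 48 authorized users.

```python
import numpy as np

# Stand-ins for CNN-AE embeddings: 48 enrolled (authorized) users,
# each represented by a 64-dimensional feature vector.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(48, 64))

def is_authorized(query_vec, gallery, threshold=5.0):
    """Open-set decision: accept the probe only if its embedding lies
    within `threshold` (Euclidean distance) of some enrolled embedding.
    The threshold value here is illustrative, not from the paper."""
    dists = np.linalg.norm(gallery - query_vec, axis=1)
    return bool(dists.min() <= threshold)

# A probe near an enrolled embedding is accepted; a distant one
# (an unknown/unauthorized walker) is rejected.
probe_known = gallery[0] + 0.01
probe_unknown = gallery[0] + 100.0
print(is_authorized(probe_known, gallery))    # True
print(is_authorized(probe_unknown, gallery))  # False
```

In a real deployment the threshold would be tuned on validation data to trade off false accepts (unauthorized walkers admitted) against false rejects (authorized walkers denied).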
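As a quick consistency check, the F1-score reported for the no-carry condition matches the harmonic mean of the reported precision and recall (note the paper may average per-class scores, so the carried-object F1 need not satisfy this exact identity):

```python
# F1 = 2PR / (P + R), using the no-carry precision and recall from the abstract.
precision, recall = 0.93676, 0.90369
f1 = 2 * precision * recall / (precision + recall)
print(round(f1 * 100, 3))  # 91.993, matching the reported F1-score
```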