{"title":"基于多特征表示和周期性部件时间建模的步态识别","authors":"Zhenni Li, Shiqiang Li, Dong Xiao, Zhengmin Gu, Yue Yu","doi":"10.1007/s40747-023-01293-z","DOIUrl":null,"url":null,"abstract":"<p>Despite the ability of 3D convolutional methods to extract spatio-temporal information simultaneously, they also increase parameter redundancy and computational and storage costs. Previous work that has utilized the 2D convolution method has approached the problem in one of two ways: either using the entire body sequence as input to extract global features or dividing the body sequence into several parts to extract local features. However, global information tends to overlook detailed information specific to each body part, while local information fails to capture relationships between local regions. Therefore, this study proposes a new framework for constructing spatio-temporal representations, which involves extracting and fusing features in a novel manner. To achieve this, we introduce the multi-feature extraction-fusion (MFEF) module, which includes two branches: each branch extracts global features or local features individually, after which they are fused using multiple strategies. Additionally, as gait is a periodic action and different body parts contribute unequally to recognition during each cycle, we propose the periodic temporal feature modeling (PTFM) module, which extracts temporal features from adjacent frame parts during the complete gait cycle, based on the fused features. Furthermore, to capture fine-grained information specific to each body part, our framework utilizes multiple parallel PTFMs to correspond with each body part. We conducted a comprehensive experimental study on the widely used public dataset CASIA-B. 
Results indicate that the proposed approach achieved an average rank-1 accuracy of 97.2% in normal walking conditions, 92.3% while carrying a bag during walking, and 80.5% while wearing a jacket during walking.</p>","PeriodicalId":10524,"journal":{"name":"Complex & Intelligent Systems","volume":"1 1","pages":""},"PeriodicalIF":5.0000,"publicationDate":"2023-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Gait recognition based on multi-feature representation and temporal modeling of periodic parts\",\"authors\":\"Zhenni Li, Shiqiang Li, Dong Xiao, Zhengmin Gu, Yue Yu\",\"doi\":\"10.1007/s40747-023-01293-z\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Despite the ability of 3D convolutional methods to extract spatio-temporal information simultaneously, they also increase parameter redundancy and computational and storage costs. Previous work that has utilized the 2D convolution method has approached the problem in one of two ways: either using the entire body sequence as input to extract global features or dividing the body sequence into several parts to extract local features. However, global information tends to overlook detailed information specific to each body part, while local information fails to capture relationships between local regions. Therefore, this study proposes a new framework for constructing spatio-temporal representations, which involves extracting and fusing features in a novel manner. To achieve this, we introduce the multi-feature extraction-fusion (MFEF) module, which includes two branches: each branch extracts global features or local features individually, after which they are fused using multiple strategies. 
Additionally, as gait is a periodic action and different body parts contribute unequally to recognition during each cycle, we propose the periodic temporal feature modeling (PTFM) module, which extracts temporal features from adjacent frame parts during the complete gait cycle, based on the fused features. Furthermore, to capture fine-grained information specific to each body part, our framework utilizes multiple parallel PTFMs to correspond with each body part. We conducted a comprehensive experimental study on the widely used public dataset CASIA-B. Results indicate that the proposed approach achieved an average rank-1 accuracy of 97.2% in normal walking conditions, 92.3% while carrying a bag during walking, and 80.5% while wearing a jacket during walking.</p>\",\"PeriodicalId\":10524,\"journal\":{\"name\":\"Complex & Intelligent Systems\",\"volume\":\"1 1\",\"pages\":\"\"},\"PeriodicalIF\":5.0000,\"publicationDate\":\"2023-12-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Complex & Intelligent Systems\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1007/s40747-023-01293-z\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Complex & Intelligent Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s40747-023-01293-z","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Gait recognition based on multi-feature representation and temporal modeling of periodic parts
Although 3D convolutional methods can extract spatio-temporal information simultaneously, they increase parameter redundancy as well as computational and storage costs. Previous work using 2D convolution has approached the problem in one of two ways: either taking the entire body sequence as input to extract global features, or dividing the body sequence into several parts to extract local features. However, global information tends to overlook detail specific to each body part, while local information fails to capture relationships between local regions. This study therefore proposes a new framework for constructing spatio-temporal representations that extracts and fuses features in a novel manner. To achieve this, we introduce the multi-feature extraction-fusion (MFEF) module, which comprises two branches: one extracts global features and the other local features, after which the two are fused using multiple strategies. Additionally, since gait is a periodic action and different body parts contribute unequally to recognition within each cycle, we propose the periodic temporal feature modeling (PTFM) module, which extracts temporal features from adjacent frame parts over the complete gait cycle, based on the fused features. Furthermore, to capture fine-grained information specific to each body part, our framework uses multiple parallel PTFMs, one corresponding to each body part. We conducted a comprehensive experimental study on the widely used public dataset CASIA-B. Results indicate that the proposed approach achieved an average rank-1 accuracy of 97.2% under normal walking conditions, 92.3% when walking while carrying a bag, and 80.5% when walking while wearing a jacket.
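The global/local split and fusion described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the part count, the toy mean/max "features", and the concatenation-based fusion are illustrative assumptions standing in for the learned convolutional branches and the multiple fusion strategies of the MFEF module; the adjacent-frame difference at the end only gestures at the kind of temporal signal PTFM models per part.

```python
import numpy as np

def split_into_parts(frames, n_parts=4):
    # frames: (T, H, W) silhouette sequence; split each frame into
    # n_parts horizontal strips (a common head/torso/legs-style partition)
    T, H, W = frames.shape
    return [frames[:, i * H // n_parts:(i + 1) * H // n_parts, :]
            for i in range(n_parts)]

def pool_feature(x):
    # toy per-frame "feature": spatial mean and max -> shape (T, 2);
    # a real model would use a learned CNN backbone here
    return np.stack([x.mean(axis=(1, 2)), x.max(axis=(1, 2))], axis=1)

def multi_feature_fuse(frames, n_parts=4):
    # global branch: one feature vector per frame from the whole body
    g = pool_feature(frames)                               # (T, 2)
    # local branch: one feature vector per frame per body part
    locs = [pool_feature(p) for p in split_into_parts(frames, n_parts)]
    l = np.concatenate(locs, axis=1)                       # (T, 2 * n_parts)
    # fusion by concatenation (one of several possible strategies)
    return np.concatenate([g, l], axis=1)                  # (T, 2 + 2 * n_parts)

def temporal_diff(feats):
    # crude stand-in for temporal modeling: change between adjacent frames
    return feats[1:] - feats[:-1]
```

With an 8-frame sequence of 64x44 silhouettes and 4 parts, `multi_feature_fuse` yields a (8, 10) fused representation and `temporal_diff` a (7, 10) sequence of frame-to-frame changes.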
Journal introduction:
Complex & Intelligent Systems aims to provide a forum for presenting and discussing novel approaches, tools, and techniques that foster cross-fertilization among the broad fields of complex systems, computational simulation, and intelligent analytics and visualization. The transdisciplinary research on which the journal focuses will expand the boundaries of our understanding by investigating the principles and processes that underlie many of the most profound problems facing society today.