{"title":"A New Head Pose Estimation Method Using Vision Transformer Model","authors":"Xufeng Ling, Dong Wang, Jie-Ci Yang","doi":"10.1145/3467707.3467729","DOIUrl":null,"url":null,"abstract":"In this paper, a self-attention-based Vision Transformer (VIT) method is introduced into estimate human head pose parameters. Firstly, the head pose image is divided into 32X32 patches, each image patch is regarded as a word, and the whole image is treated as a paragraph composed of n words by the VIT. Image recognition can be regarded as the semantic recognition of this paragraph. Next, we redesign the regression VIT to estimate the parameters. Then we select Head Pose Database as the training and validation dataset. The VIT is trained on the enhanced and normalized dataset. Finally, the trained VIT is used to regress the head pose parameters on testing samples. Experimental results show that VIT has high accuracy and good generalization ability for head pose estimation.","PeriodicalId":145582,"journal":{"name":"2021 7th International Conference on Computing and Artificial Intelligence","volume":"111 23","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 7th International Conference on Computing and Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3467707.3467729","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
In this paper, a self-attention-based Vision Transformer (VIT) method is introduced to estimate human head pose parameters. First, the head pose image is divided into 32×32 patches; the VIT regards each image patch as a word and treats the whole image as a paragraph composed of n words, so image recognition can be regarded as semantic recognition of this paragraph. Next, we redesign the VIT as a regression model to estimate the pose parameters. We then select the Head Pose Database as the training and validation dataset and train the VIT on the augmented and normalized data. Finally, the trained VIT is used to regress the head pose parameters on the testing samples. Experimental results show that the VIT achieves high accuracy and good generalization ability for head pose estimation.
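
To make the patch-as-word idea concrete, the sketch below shows one way a ViT-style regressor for head pose could be assembled in PyTorch. It is not the authors' implementation: only the 32×32 patch size comes from the abstract, while the input resolution (64×64 grayscale crops), embedding width, encoder depth, and the two output angles (pan and tilt) are illustrative assumptions.

```python
# A minimal sketch (not the paper's code) of a ViT-style head pose regressor.
# Assumptions: 64x64 grayscale inputs, 32x32 patches, two output angles (pan, tilt).
import torch
import torch.nn as nn

class ViTPoseRegressor(nn.Module):
    def __init__(self, img_size=64, patch_size=32, in_ch=1, dim=128,
                 depth=4, heads=4, num_outputs=2):
        super().__init__()
        n_patches = (img_size // patch_size) ** 2
        # Linear projection of flattened patches ("words") into embeddings.
        self.patch_embed = nn.Conv2d(in_ch, dim, kernel_size=patch_size, stride=patch_size)
        # Learnable [CLS]-style token that summarizes the "paragraph" for regression.
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches + 1, dim))
        encoder_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                                   dim_feedforward=dim * 4,
                                                   batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        # Regression head: continuous pose angles instead of class logits.
        self.head = nn.Linear(dim, num_outputs)

    def forward(self, x):                      # x: (B, C, H, W)
        p = self.patch_embed(x)                # (B, dim, H/ps, W/ps)
        p = p.flatten(2).transpose(1, 2)       # (B, n_patches, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        z = torch.cat([cls, p], dim=1) + self.pos_embed
        z = self.encoder(z)
        return self.head(z[:, 0])              # (B, 2): e.g. pan and tilt angles

# Example: regress pose angles for a batch of four 64x64 grayscale face crops.
model = ViTPoseRegressor()
angles = model(torch.randn(4, 1, 64, 64))
print(angles.shape)  # torch.Size([4, 2])
```

In a setup like this, the model would typically be trained with a mean-squared-error loss between predicted and ground-truth angles, which is the usual change when a classification ViT head is replaced by a regression head as the abstract describes.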