Prior Knowledge-guided Hierarchical Action Quality Assessment with 3D Convolution and Attention Mechanism

Authors: Haoyang Zhou, Teng Hou, Jitao Li
Journal: Journal of Physics-Photonics
DOI: 10.1088/1742-6596/2632/1/012027
Published: 2023-11-01
Abstract. Action quality assessment (AQA) has recently attracted growing interest in the computer vision and deep learning communities. Most researchers, however, still follow the traditional approach of reusing models from video action recognition, which overlooks features crucial to AQA, such as movement fluency and degree of completion. Other researchers have instead adopted the transformer paradigm to capture both action details and overall action integrity, but the high computational cost of transformers makes them impractical for real-time tasks. Moreover, because action types are diverse, it is challenging to rely on a single shared model to assess the quality of all of them. To address these issues, we propose a novel network structure for AQA, the first to integrate multi-model capabilities through a classification model. Specifically, we use a pre-trained I3D model equipped with a self-attention block for classification, which allows a single system to evaluate many categories of actions. Furthermore, we introduce self-attention and multi-head attention into the traditional convolutional neural network: by systematically replacing its last few layers, our model gains a greater ability to sense the global coordination of different actions. We verify the effectiveness of our approach on the AQA-7 dataset; compared with other popular models, ours achieves satisfactory performance while maintaining a low computational cost.
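The abstract describes replacing the last few layers of a 3D convolutional backbone with multi-head self-attention over spatio-temporal features, followed by a quality-score regressor. The paper does not give implementation details, so the following is a minimal, hypothetical PyTorch sketch of that idea: all layer names, channel sizes, and the pooling/regression choices are illustrative assumptions, not the authors' actual architecture.

```python
import torch
import torch.nn as nn

class AttentionQualityHead(nn.Module):
    """Hypothetical sketch: a head that could replace the final conv
    stages of a 3D CNN backbone (e.g. I3D), attending over the
    spatio-temporal feature tokens before regressing a quality score.
    Channel sizes and head counts are illustrative."""

    def __init__(self, in_channels=832, embed_dim=256, num_heads=4):
        super().__init__()
        # 1x1x1 conv projects backbone channels to the attention width
        self.proj = nn.Conv3d(in_channels, embed_dim, kernel_size=1)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads,
                                          batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)
        self.score = nn.Linear(embed_dim, 1)  # regress one quality score

    def forward(self, feats):
        # feats: (B, C, T, H, W) features from a truncated 3D backbone
        x = self.proj(feats)                   # (B, D, T, H, W)
        tokens = x.flatten(2).transpose(1, 2)  # (B, T*H*W, D) token sequence
        attended, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + attended)  # residual + layer norm
        pooled = tokens.mean(dim=1)            # average over all tokens
        return self.score(pooled).squeeze(-1)  # (B,) quality scores

head = AttentionQualityHead()
dummy = torch.randn(2, 832, 4, 7, 7)  # batch of 2 mid-level feature maps
print(head(dummy).shape)              # torch.Size([2])
```

Because every token attends to every other token, this head can relate body configuration early in the action to its finish, which is one plausible reading of the "global coordination" the abstract attributes to the attention layers.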