Narayana Darapaneni, A. Paduri, Dinu Thomas, Jisha C U, Abhinao Shrivastava, Seema Biradar
{"title":"视频理解:通过自我关注可学习的关键描述符标记视频","authors":"Narayana Darapaneni, A. Paduri, Dinu Thomas, Jisha C U, Abhinao Shrivastava, Seema Biradar","doi":"10.1109/ESDC56251.2023.10149869","DOIUrl":null,"url":null,"abstract":"In today’s world, the UGC (User Generated Contents) videos have increased exponentially. Billions of videos are uploaded, played and exchanged between different actors. In this context, automatic video content classification has become a critical and challenging problem, especially in areas like video-based search, recommendation etc. In this work we try to extract frame-level visual and audio features, pre-extracted features are then converted into a compact video level representation effectively and efficiently. We aim to classify the video into a set of categories with high accuracy. From the literature survey, we identified that, the tagging of videos has been a problem which has not reached its maturity yet, and there are many researches happening in this area. It is observed that, the clustering based video description methodologies show a better result compared to the temporal algorithms. We also have identified that, majority of the SOTA techniques use the VLAD (Vector of Locally Aggregated Descriptors) technique to extract the video features and make the codebook learnable through some adjustments introduced in the NetVLAD. The key descriptors would be mostly noisy, and many of them are insignificant. In this work we aim to cascade a Self-Attention Block on the NetVLAD which can extract the significant descriptors and filter out the Noise. The YouTube 8M dataset shall be used for training the model and performance will be compared with other SOTA techniques. Like other similar works, model performance will be measured by GAP Metric (Global Average Precision) for all the videos predicted labels. We aim to achieve a GAP score close to 85% for this work.","PeriodicalId":354855,"journal":{"name":"2023 11th International Symposium on Electronic Systems Devices and Computing (ESDC)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Video understanding : Tagging of videos through self attentive learnable key descriptors\",\"authors\":\"Narayana Darapaneni, A. Paduri, Dinu Thomas, Jisha C U, Abhinao Shrivastava, Seema Biradar\",\"doi\":\"10.1109/ESDC56251.2023.10149869\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In today’s world, the UGC (User Generated Contents) videos have increased exponentially. Billions of videos are uploaded, played and exchanged between different actors. In this context, automatic video content classification has become a critical and challenging problem, especially in areas like video-based search, recommendation etc. In this work we try to extract frame-level visual and audio features, pre-extracted features are then converted into a compact video level representation effectively and efficiently. We aim to classify the video into a set of categories with high accuracy. From the literature survey, we identified that, the tagging of videos has been a problem which has not reached its maturity yet, and there are many researches happening in this area. It is observed that, the clustering based video description methodologies show a better result compared to the temporal algorithms. 
We also have identified that, majority of the SOTA techniques use the VLAD (Vector of Locally Aggregated Descriptors) technique to extract the video features and make the codebook learnable through some adjustments introduced in the NetVLAD. The key descriptors would be mostly noisy, and many of them are insignificant. In this work we aim to cascade a Self-Attention Block on the NetVLAD which can extract the significant descriptors and filter out the Noise. The YouTube 8M dataset shall be used for training the model and performance will be compared with other SOTA techniques. Like other similar works, model performance will be measured by GAP Metric (Global Average Precision) for all the videos predicted labels. We aim to achieve a GAP score close to 85% for this work.\",\"PeriodicalId\":354855,\"journal\":{\"name\":\"2023 11th International Symposium on Electronic Systems Devices and Computing (ESDC)\",\"volume\":\"65 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-05-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 11th International Symposium on Electronic Systems Devices and Computing (ESDC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ESDC56251.2023.10149869\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 11th International Symposium on Electronic Systems Devices and Computing (ESDC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ESDC56251.2023.10149869","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Video understanding: Tagging of videos through self-attentive learnable key descriptors
In today's world, UGC (User Generated Content) videos have grown exponentially: billions of videos are uploaded, played, and exchanged between different actors. In this context, automatic video content classification has become a critical and challenging problem, especially in areas such as video-based search and recommendation. In this work we extract frame-level visual and audio features and then convert these pre-extracted features into a compact video-level representation effectively and efficiently, with the aim of classifying each video into a set of categories with high accuracy. From the literature survey, we identified that video tagging is a problem that has not yet reached maturity, and much research is ongoing in this area. It is observed that clustering-based video description methodologies show better results than temporal algorithms. We also identified that the majority of SOTA techniques use VLAD (Vector of Locally Aggregated Descriptors) to extract video features and make the codebook learnable through the adjustments introduced in NetVLAD. The key descriptors are mostly noisy, and many of them are insignificant. In this work we cascade a Self-Attention Block onto NetVLAD to extract the significant descriptors and filter out the noise. The YouTube-8M dataset will be used to train the model, and its performance will be compared with other SOTA techniques. As in other similar works, model performance will be measured by the GAP metric (Global Average Precision) over all predicted video labels. We aim to achieve a GAP score close to 85% for this work.
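The abstract does not give implementation details, so the following is only a minimal PyTorch sketch of the idea it describes: frame-level descriptors are aggregated by a learnable NetVLAD layer, and the resulting cluster descriptors are then passed through a self-attention block before classification, so that informative descriptors can be emphasised and noisy ones down-weighted. All class names, layer sizes, and hyperparameters (`SelfAttentiveNetVLAD`, `num_clusters`, `num_heads`, etc.) are assumptions for illustration, not the authors' actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class NetVLAD(nn.Module):
    """Learnable VLAD aggregation: soft-assigns frame descriptors to K clusters
    and accumulates residuals against learnable cluster centres."""

    def __init__(self, feature_dim: int, num_clusters: int):
        super().__init__()
        self.assign = nn.Linear(feature_dim, num_clusters)            # soft-assignment logits
        self.centroids = nn.Parameter(torch.randn(num_clusters, feature_dim))

    def forward(self, x):                                             # x: (B, T, D) frame features
        soft_assign = F.softmax(self.assign(x), dim=-1)               # (B, T, K)
        residual = x.unsqueeze(2) - self.centroids                    # (B, T, K, D) residuals
        vlad = (soft_assign.unsqueeze(-1) * residual).sum(dim=1)      # (B, K, D) cluster descriptors
        return F.normalize(vlad, dim=-1)                              # intra-normalisation


class SelfAttentiveNetVLAD(nn.Module):
    """NetVLAD followed by a self-attention block over the K cluster descriptors,
    intended to keep significant descriptors and filter out noisy ones (assumed design)."""

    def __init__(self, feature_dim: int, num_clusters: int, num_classes: int, num_heads: int = 4):
        super().__init__()
        self.netvlad = NetVLAD(feature_dim, num_clusters)
        self.attention = nn.MultiheadAttention(feature_dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(num_clusters * feature_dim, num_classes)

    def forward(self, frames):                                        # frames: (B, T, D)
        vlad = self.netvlad(frames)                                   # (B, K, D)
        attended, _ = self.attention(vlad, vlad, vlad)                # descriptors re-weighted by self-attention
        video_repr = attended.flatten(start_dim=1)                    # compact video-level representation
        return self.classifier(video_repr)                            # multi-label tag logits


# Hypothetical usage: 2 videos, 300 frames each, 128-d pre-extracted features,
# and a placeholder tag vocabulary size (YouTube-8M uses a few thousand labels).
model = SelfAttentiveNetVLAD(feature_dim=128, num_clusters=64, num_classes=1000)
logits = model(torch.randn(2, 300, 128))
```

For evaluation, the GAP metric mentioned above is commonly computed by pooling the top-k predicted labels from every video, sorting them by confidence, and taking the average precision over that single ranked list. A small sketch, assuming NumPy and a pre-built pooled list of (confidence, relevance) pairs:

```python
import numpy as np


def global_average_precision(confidences, relevances, num_positives):
    """GAP over a pooled, confidence-ranked list of predicted (label, relevance) pairs."""
    order = np.argsort(-np.asarray(confidences, dtype=float))
    rel = np.asarray(relevances, dtype=float)[order]
    precision_at_i = np.cumsum(rel) / (np.arange(len(rel)) + 1)       # precision at each rank
    return float(np.sum(precision_at_i * rel) / num_positives)        # sum of p(i) * Δrecall(i)


# Example: three pooled predictions, two of which match ground-truth labels.
global_average_precision([0.9, 0.8, 0.3], [1, 0, 1], num_positives=2)  # ≈ 0.83
```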