{"title":"Syntactically and semantically enhanced captioning network via hybrid attention and POS tagging prompt","authors":"Deepali Verma, Tanima Dutta","doi":"10.1016/j.cviu.2025.104340","DOIUrl":null,"url":null,"abstract":"<div><div>Video captioning has become a thriving research area, with current methods relying on static visuals or motion information. However, videos contain a complex interplay between multiple objects with unique temporal patterns. Traditional techniques struggle to capture this intricate connection, leading to inaccurate captions due to the gap between video features and generated text. Analyzing these temporal variations and identifying relevant objects remains a challenge. This paper proposes SySCapNet, a novel deep-learning architecture for video captioning, designed to address this limitation. SySCapNet effectively captures objects involved in motions and extracts spatio-temporal action features. This information, along with visual features and motion data, guides the caption generation process. We introduce a groundbreaking hybrid attention module that leverages both visual saliency and spatio-temporal dynamics to extract highly detailed and semantically meaningful features. Furthermore, we incorporate part-of-speech tagging to guide the network in disambiguating words and understanding their grammatical roles. Extensive evaluations on benchmark datasets demonstrate that SySCapNet achieves superior performance compared to existing methods. The generated captions are not only informative but also grammatically correct and rich in context, surpassing the limitations of basic AI descriptions.</div></div>","PeriodicalId":50633,"journal":{"name":"Computer Vision and Image Understanding","volume":"255 ","pages":"Article 104340"},"PeriodicalIF":4.3000,"publicationDate":"2025-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer Vision and Image Understanding","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1077314225000633","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Video captioning has become a thriving research area, with current methods relying on static visuals or motion information. However, videos contain a complex interplay between multiple objects, each with its own temporal pattern. Traditional techniques struggle to capture this intricate connection, and the resulting gap between video features and generated text leads to inaccurate captions. Analyzing these temporal variations and identifying the relevant objects remains a challenge. This paper proposes SySCapNet, a novel deep-learning architecture for video captioning designed to address this limitation. SySCapNet effectively captures the objects involved in motion and extracts spatio-temporal action features; this information, together with visual features and motion data, guides the caption generation process. We introduce a hybrid attention module that leverages both visual saliency and spatio-temporal dynamics to extract highly detailed, semantically meaningful features. Furthermore, we incorporate part-of-speech tagging to guide the network in disambiguating words and understanding their grammatical roles. Extensive evaluations on benchmark datasets demonstrate that SySCapNet outperforms existing methods: the generated captions are not only informative but also grammatically correct and rich in context, surpassing basic machine-generated descriptions. A rough sketch of each of the two key components follows.
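The abstract describes the hybrid attention module only at a high level. As a minimal sketch of what fusing a saliency-driven visual stream with a spatio-temporal motion stream via additive attention could look like, the PyTorch snippet below is illustrative only: the class name, feature dimensions, and the particular fusion scheme are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class HybridAttention(nn.Module):
    """Illustrative fusion of a visual stream and a motion stream via
    additive attention over frames. Dimensions are assumed, not taken
    from the paper."""

    def __init__(self, vis_dim=2048, motion_dim=1024, hidden_dim=512):
        super().__init__()
        self.vis_proj = nn.Linear(vis_dim, hidden_dim)      # project appearance features
        self.motion_proj = nn.Linear(motion_dim, hidden_dim)  # project motion features
        self.score = nn.Linear(hidden_dim, 1)               # per-frame attention score

    def forward(self, vis_feats, motion_feats):
        # vis_feats:    (batch, frames, vis_dim)    per-frame appearance features
        # motion_feats: (batch, frames, motion_dim) per-frame motion features
        fused = torch.tanh(self.vis_proj(vis_feats) + self.motion_proj(motion_feats))
        weights = torch.softmax(self.score(fused), dim=1)  # attend over the frame axis
        context = (weights * fused).sum(dim=1)             # (batch, hidden_dim)
        return context, weights

# Usage with dummy tensors: 2 clips of 16 frames each.
attn = HybridAttention()
vis = torch.randn(2, 16, 2048)
motion = torch.randn(2, 16, 1024)
context, weights = attn(vis, motion)
print(context.shape)  # torch.Size([2, 512])
```

A context vector like this one would then condition the caption decoder alongside the raw visual and motion features, which is one plausible reading of "guides the caption generation process".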
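Likewise, the part-of-speech tagging prompt is not specified in the abstract. One plausible construction, sketched below with NLTK's off-the-shelf perceptron tagger, turns a draft caption into a POS sequence that could be fed back to the decoder as a grammatical prompt; the function name, prompt format, and NLTK resource name (which varies across NLTK versions) are assumptions.

```python
import nltk

# Download the perceptron tagger once; the exact resource name may
# differ across NLTK versions.
nltk.download("averaged_perceptron_tagger", quiet=True)

def pos_prompt(caption: str) -> str:
    """Convert a draft caption into a POS-tag sequence (hypothetical
    prompt format, not the paper's)."""
    tokens = caption.split()
    tagged = nltk.pos_tag(tokens)  # e.g. [('a', 'DT'), ('dog', 'NN'), ...]
    return " ".join(tag for _, tag in tagged)

print(pos_prompt("a dog runs across the field"))
# -> DT NN VBZ IN DT NN
```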
About the Journal:
The central focus of this journal is the computer analysis of pictorial information. Computer Vision and Image Understanding publishes papers covering all aspects of image analysis from the low-level, iconic processes of early vision to the high-level, symbolic processes of recognition and interpretation. A wide range of topics in the image understanding area is covered, including papers offering insights that differ from predominant views.
Research Areas Include:
• Theory
• Early vision
• Data structures and representations
• Shape
• Range
• Motion
• Matching and recognition
• Architecture and languages
• Vision systems