Multi-object event graph representation learning for Video Question Answering
Yanan Wang, Shuichiro Haruta, Donghuo Zeng, Julio Vizcarra, Mori Kurokawa
arXiv:2409.07747 · 2024-09-12
Abstract
Video question answering (VideoQA) is the task of predicting the correct answer to questions posed about a given video. The system must comprehend spatial and temporal relationships among objects extracted from videos to perform causal and temporal reasoning. While prior works have focused on modeling individual object movements using transformer-based methods, they falter when capturing complex scenarios involving multiple objects (e.g., "a boy is throwing a ball in a hoop"). We propose a contrastive language event graph representation learning method called CLanG to address this limitation. Aiming to capture event representations associated with multiple objects, our method employs a multi-layer GNN-cluster module for adversarial graph representation learning, enabling contrastive learning between the question text and its relevant multi-object event graph. Our method outperforms a strong baseline, achieving up to 2.2% higher accuracy on two challenging VideoQA datasets, NExT-QA and TGIF-QA-R. In particular, it is 2.8% better than baselines in handling causal and temporal questions, highlighting its strength in reasoning about multiple object-based events.
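
The abstract gives no implementation details, but its core idea of contrasting a question-text embedding against a graph-level embedding of a multi-object event graph can be illustrated with a minimal sketch. The code below is not the authors' CLanG implementation: it assumes plain PyTorch, substitutes a simple two-layer mean-aggregation graph encoder for the paper's multi-layer GNN-cluster module, omits the adversarial component, and uses an InfoNCE-style contrastive loss; all module and variable names are hypothetical.

```python
# Minimal sketch (assumed, not the paper's method): contrastive alignment between
# question-text embeddings and graph-level embeddings of multi-object event graphs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGraphEncoder(nn.Module):
    """Two rounds of mean-aggregation message passing over a dense adjacency
    matrix (with self-loops), followed by mean pooling to one graph vector."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, out_dim)

    def forward(self, x, adj):
        # x: (num_nodes, in_dim), adj: (num_nodes, num_nodes)
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        x = F.relu(self.lin1((adj @ x) / deg))   # first message-passing layer
        x = self.lin2((adj @ x) / deg)           # second message-passing layer
        return x.mean(dim=0)                     # graph-level embedding

def info_nce(text_emb, graph_emb, temperature=0.07):
    """InfoNCE loss over a batch: each question is matched to its own event graph,
    and all other graphs in the batch serve as negatives."""
    text_emb = F.normalize(text_emb, dim=-1)     # (B, D)
    graph_emb = F.normalize(graph_emb, dim=-1)   # (B, D)
    logits = text_emb @ graph_emb.t() / temperature
    targets = torch.arange(text_emb.size(0))
    return F.cross_entropy(logits, targets)

# Toy usage: 4 question/graph pairs, each event graph with 5 object/event nodes.
encoder = SimpleGraphEncoder(in_dim=32, hid_dim=64, out_dim=128)
graphs = torch.stack([
    encoder(torch.randn(5, 32), torch.eye(5)) for _ in range(4)
])                                 # (4, 128) graph embeddings
questions = torch.randn(4, 128)    # placeholder question-text embeddings
loss = info_nce(questions, graphs)
```

In this reading, the graph encoder stands in for whatever module produces a single multi-object event representation, and the contrastive loss pulls each question toward its own event graph while pushing it away from the other graphs in the batch.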