{"title":"SynopGround: A Large-Scale Dataset for Multi-Paragraph Video Grounding from TV Dramas and Synopses","authors":"Chaolei Tan, Zihang Lin, Junfu Pu, Zhongang Qi, Wei-Yi Pei, Zhi Qu, Yexin Wang, Ying Shan, Wei-Shi Zheng, Jian-Fang Hu","doi":"arxiv-2408.01669","DOIUrl":null,"url":null,"abstract":"Video grounding is a fundamental problem in multimodal content understanding,\naiming to localize specific natural language queries in an untrimmed video.\nHowever, current video grounding datasets merely focus on simple events and are\neither limited to shorter videos or brief sentences, which hinders the model\nfrom evolving toward stronger multimodal understanding capabilities. To address\nthese limitations, we present a large-scale video grounding dataset named\nSynopGround, in which more than 2800 hours of videos are sourced from popular\nTV dramas and are paired with accurately localized human-written synopses. Each\nparagraph in the synopsis serves as a language query and is manually annotated\nwith precise temporal boundaries in the long video. These paragraph queries are\ntightly correlated to each other and contain a wealth of abstract expressions\nsummarizing video storylines and specific descriptions portraying event\ndetails, which enables the model to learn multimodal perception on more\nintricate concepts over longer context dependencies. Based on the dataset, we\nfurther introduce a more complex setting of video grounding dubbed\nMulti-Paragraph Video Grounding (MPVG), which takes as input multiple\nparagraphs and a long video for grounding each paragraph query to its temporal\ninterval. In addition, we propose a novel Local-Global Multimodal Reasoner\n(LGMR) to explicitly model the local-global structures of long-term multimodal\ninputs for MPVG. Our method provides an effective baseline solution to the\nmulti-paragraph video grounding problem. Extensive experiments verify the\nproposed model's effectiveness as well as its superiority in long-term\nmulti-paragraph video grounding over prior state-of-the-arts. Dataset and code\nare publicly available. Project page: https://synopground.github.io/.","PeriodicalId":501480,"journal":{"name":"arXiv - CS - Multimedia","volume":"93 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Multimedia","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.01669","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Video grounding is a fundamental problem in multimodal content understanding,
aiming to localize specific natural language queries in an untrimmed video.
However, current video grounding datasets focus only on simple events and are
limited to either short videos or brief sentences, which hinders models
from developing stronger multimodal understanding capabilities. To address
these limitations, we present a large-scale video grounding dataset named
SynopGround, in which more than 2800 hours of videos are sourced from popular
TV dramas and are paired with accurately localized human-written synopses. Each
paragraph in the synopsis serves as a language query and is manually annotated
with precise temporal boundaries in the long video. These paragraph queries are
tightly correlated with each other and contain a wealth of abstract expressions
summarizing video storylines and specific descriptions portraying event
details, which enables the model to learn multimodal perception of more
intricate concepts over longer context dependencies. Based on the dataset, we
further introduce a more complex setting of video grounding dubbed
Multi-Paragraph Video Grounding (MPVG), which takes as input multiple
paragraphs and a long video for grounding each paragraph query to its temporal
interval. In addition, we propose a novel Local-Global Multimodal Reasoner
(LGMR) to explicitly model the local-global structures of long-term multimodal
inputs for MPVG. Our method provides an effective baseline solution to the
multi-paragraph video grounding problem. Extensive experiments verify the
proposed model's effectiveness as well as its superiority in long-term
multi-paragraph video grounding over prior state-of-the-art methods. The dataset and code
are publicly available. Project page: https://synopground.github.io/.
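
To make the MPVG setting concrete, below is a minimal, self-contained sketch (not the authors' code) of how a sample and its evaluation might look: each synopsis paragraph acts as a query, and a model must predict one temporal interval per paragraph within the long video. The field names, numeric values, and the IoU-based recall metric are illustrative assumptions, not the dataset's official schema or evaluation protocol.

```python
from typing import List, Tuple

# Hypothetical MPVG sample: one long video paired with an ordered list of
# synopsis paragraphs and their annotated temporal boundaries (in seconds).
sample = {
    "video_id": "drama_episode_001",
    "duration": 2700.0,  # e.g., a ~45-minute TV drama episode
    "paragraphs": [
        "The protagonist returns to her hometown after years abroad ...",
        "A family dinner turns tense when an old secret is revealed ...",
    ],
    "annotations": [(120.0, 540.0), (1310.0, 1725.0)],  # one interval per paragraph
}

def temporal_iou(pred: Tuple[float, float], gt: Tuple[float, float]) -> float:
    """Intersection-over-union of two temporal intervals."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0

def recall_at_iou(preds: List[Tuple[float, float]],
                  gts: List[Tuple[float, float]],
                  threshold: float = 0.5) -> float:
    """Fraction of paragraph queries grounded with IoU above a threshold."""
    hits = sum(temporal_iou(p, g) >= threshold for p, g in zip(preds, gts))
    return hits / len(gts)

# Example: a model's predicted intervals for the two paragraph queries above.
predictions = [(100.0, 520.0), (1400.0, 1700.0)]
print(recall_at_iou(predictions, sample["annotations"], threshold=0.5))
```

The key difference from standard single-sentence grounding is that all paragraphs of a synopsis are provided jointly with the long video, so a model can exploit the order and correlations among queries rather than localizing each one in isolation.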