Shihui Zhang, Houlin Wang, Lei Wang, Xueqiang Han, Qing Tian
{"title":"CGCN:用于少量时间动作定位的上下文图卷积网络","authors":"Shihui Zhang , Houlin Wang , Lei Wang , Xueqiang Han , Qing Tian","doi":"10.1016/j.ipm.2024.103926","DOIUrl":null,"url":null,"abstract":"<div><div>Localizing human actions in videos has attracted extensive attention from industry and academia. Few-Shot Temporal Action Localization (FS-TAL) aims to detect human actions in untrimmed videos using a limited number of training samples. Existing FS-TAL methods usually ignore the semantic context between video snippets, making it difficult to detect actions during the query process. In this paper, we propose a novel FS-TAL method named Context Graph Convolutional Network (CGCN) which employs multi-scale graph convolution to aggregate semantic context between video snippets in addition to exploiting their temporal context. Specifically, CGCN constructs a graph for each scale of a video, where each video snippet is a node, and the relationships between the snippets are edges. There are three types of edges, namely sequence edges, intra-action edges, and inter-action edges. CGCN establishes sequence edges to enhance temporal expression. Intra-action edges utilize hyperbolic space to encapsulate context among video snippets within each action, while inter-action edges leverage Euclidean space to capture similar semantics between different actions. Through graph convolution on each scale, CGCN enables the acquisition of richer and context-aware video representations. Experiments demonstrate CGCN outperforms the second-best method by 4.5%/0.9% and 4.3%/0.9% mAP on the ActivityNet and THUMOS14 datasets in one-shot/five-shot scenarios, respectively, at [email protected]. The source code can be found in <span><span>https://github.com/mugenggeng/CGCN.git</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50365,"journal":{"name":"Information Processing & Management","volume":null,"pages":null},"PeriodicalIF":7.4000,"publicationDate":"2024-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"CGCN: Context graph convolutional network for few-shot temporal action localization\",\"authors\":\"Shihui Zhang , Houlin Wang , Lei Wang , Xueqiang Han , Qing Tian\",\"doi\":\"10.1016/j.ipm.2024.103926\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Localizing human actions in videos has attracted extensive attention from industry and academia. Few-Shot Temporal Action Localization (FS-TAL) aims to detect human actions in untrimmed videos using a limited number of training samples. Existing FS-TAL methods usually ignore the semantic context between video snippets, making it difficult to detect actions during the query process. In this paper, we propose a novel FS-TAL method named Context Graph Convolutional Network (CGCN) which employs multi-scale graph convolution to aggregate semantic context between video snippets in addition to exploiting their temporal context. Specifically, CGCN constructs a graph for each scale of a video, where each video snippet is a node, and the relationships between the snippets are edges. There are three types of edges, namely sequence edges, intra-action edges, and inter-action edges. CGCN establishes sequence edges to enhance temporal expression. Intra-action edges utilize hyperbolic space to encapsulate context among video snippets within each action, while inter-action edges leverage Euclidean space to capture similar semantics between different actions. 
Through graph convolution on each scale, CGCN enables the acquisition of richer and context-aware video representations. Experiments demonstrate CGCN outperforms the second-best method by 4.5%/0.9% and 4.3%/0.9% mAP on the ActivityNet and THUMOS14 datasets in one-shot/five-shot scenarios, respectively, at [email protected]. The source code can be found in <span><span>https://github.com/mugenggeng/CGCN.git</span><svg><path></path></svg></span>.</div></div>\",\"PeriodicalId\":50365,\"journal\":{\"name\":\"Information Processing & Management\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":7.4000,\"publicationDate\":\"2024-10-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information Processing & Management\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0306457324002851\",\"RegionNum\":1,\"RegionCategory\":\"管理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Processing & Management","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0306457324002851","RegionNum":1,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
CGCN: Context graph convolutional network for few-shot temporal action localization
Localizing human actions in videos has attracted extensive attention from industry and academia. Few-Shot Temporal Action Localization (FS-TAL) aims to detect human actions in untrimmed videos using a limited number of training samples. Existing FS-TAL methods usually ignore the semantic context between video snippets, making it difficult to detect actions during the query process. In this paper, we propose a novel FS-TAL method named Context Graph Convolutional Network (CGCN), which employs multi-scale graph convolution to aggregate semantic context between video snippets in addition to exploiting their temporal context. Specifically, CGCN constructs a graph for each scale of a video, where each video snippet is a node and the relationships between snippets are edges. There are three types of edges: sequence edges, intra-action edges, and inter-action edges. Sequence edges enhance temporal expression; intra-action edges use hyperbolic space to encapsulate context among the video snippets within each action; and inter-action edges use Euclidean space to capture similar semantics between different actions. Through graph convolution at each scale, CGCN obtains richer, context-aware video representations. Experiments demonstrate that CGCN outperforms the second-best method by 4.5%/0.9% mAP on ActivityNet and 4.3%/0.9% mAP on THUMOS14 in the one-shot/five-shot scenarios, respectively, at [email protected]. The source code is available at https://github.com/mugenggeng/CGCN.git.
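To make the snippet-graph idea in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of a graph over video snippets with the three edge types (sequence, intra-action, inter-action) and a single graph-convolution layer. All function and variable names, the cosine-similarity criterion, and the threshold are illustrative assumptions, not the paper's code; the sketch also omits the hyperbolic-space modeling of intra-action context and the multi-scale construction. The authors' actual implementation is in the repository linked above.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: build an adjacency matrix over snippet features with
# sequence, intra-action, and inter-action edges, then aggregate context with
# one plain graph-convolution layer. Names and thresholds are illustrative.

def build_snippet_adjacency(feats: torch.Tensor,
                            action_ids: torch.Tensor,
                            sim_threshold: float = 0.8) -> torch.Tensor:
    """feats: (T, D) snippet features; action_ids: (T,) coarse per-snippet
    action assignment (-1 for background). Returns a (T, T) adjacency."""
    T = feats.size(0)
    adj = torch.zeros(T, T)

    # Sequence edges: connect temporally adjacent snippets.
    idx = torch.arange(T - 1)
    adj[idx, idx + 1] = 1.0
    adj[idx + 1, idx] = 1.0

    # Intra-action edges: connect snippets assigned to the same action.
    same_action = (action_ids.unsqueeze(0) == action_ids.unsqueeze(1)) & \
                  (action_ids.unsqueeze(0) >= 0)
    adj[same_action] = 1.0

    # Inter-action edges: connect semantically similar snippets that belong to
    # different actions, using cosine similarity in Euclidean feature space.
    normed = nn.functional.normalize(feats, dim=-1)
    sim = normed @ normed.t()
    diff_action = (action_ids.unsqueeze(0) != action_ids.unsqueeze(1)) & \
                  (action_ids.unsqueeze(0) >= 0) & (action_ids.unsqueeze(1) >= 0)
    adj[diff_action & (sim > sim_threshold)] = 1.0

    adj.fill_diagonal_(1.0)  # self-loops
    return adj


class SnippetGraphConv(nn.Module):
    """One graph-convolution layer over snippet features."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Symmetrically normalize the adjacency, then aggregate neighbors.
        deg = adj.sum(dim=-1).clamp(min=1.0)
        norm = deg.rsqrt()
        adj_norm = norm.unsqueeze(1) * adj * norm.unsqueeze(0)
        return torch.relu(self.proj(adj_norm @ feats))


if __name__ == "__main__":
    T, D = 32, 256                       # snippets, feature dim (illustrative)
    feats = torch.randn(T, D)
    action_ids = torch.randint(-1, 3, (T,))
    adj = build_snippet_adjacency(feats, action_ids)
    layer = SnippetGraphConv(D, 128)
    out = layer(feats, adj)
    print(out.shape)                     # torch.Size([32, 128])
```

In this reading, the abstract's intra-action edges would enrich context within a single action instance, while inter-action edges let snippets from different actions with similar semantics share information; repeating such a layer at several temporal scales would yield the multi-scale, context-aware representations the paper describes.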
Journal introduction:
Information Processing and Management is dedicated to publishing cutting-edge original research at the convergence of computing and information science. Our scope encompasses theory, methods, and applications across various domains, including advertising, business, health, information science, information technology, marketing, and social computing.
We aim to cater to the interests of both primary researchers and practitioners by offering an effective platform for the timely dissemination of advanced and topical issues in this interdisciplinary field. The journal places particular emphasis on original research articles, research survey articles, research method articles, and articles addressing critical applications of research. Join us in advancing knowledge and innovation at the intersection of computing and information science.