Shagan Sah, Thang Nguyen, Miguel Domínguez, F. Such, R. Ptucha
2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 2208–2216, July 2017. DOI: 10.1109/CVPRW.2017.274
Temporally Steered Gaussian Attention for Video Understanding
Recent advances in video understanding are enabling remarkable developments in video search, summarization, automatic captioning, and human-computer interaction. Attention mechanisms are a powerful way to steer focus onto different sections of a video. Existing mechanisms are driven by prior training probabilities and require input instances of identical temporal duration. We introduce an intuitive video understanding framework that combines continuous attention mechanisms over a family of Gaussian distributions with a hierarchy-based video representation. The hierarchical framework enables efficient abstract temporal representations of video. Video attributes steer the attention mechanism intelligently, independent of video length. Our fully learnable, end-to-end approach helps predict salient temporal regions of actions/objects in the video. We demonstrate state-of-the-art captioning results on the popular MSVD, MSR-VTT, and M-VAD video datasets.
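The core idea of attending over time with a Gaussian distribution can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the paper's implementation: the function name, feature shapes, and the choice to express the Gaussian center and width as relative positions (which is one way to make the weights independent of video length, as the abstract describes) are all hypothetical; in the paper the Gaussian parameters would be predicted by the learned model rather than supplied by hand.

```python
import numpy as np

def gaussian_temporal_attention(features, mu, sigma):
    """Attend over T frame-level features with a Gaussian over time.

    features: (T, D) array of per-frame features.
    mu:       center of attention as a relative position in [0, 1].
    sigma:    width of the temporal window (relative units).
    Returns the (D,) attended context vector and the (T,) weights.
    """
    T = features.shape[0]
    # Relative positions in [0, 1] decouple the attention weights
    # from the absolute number of frames (video length).
    positions = np.linspace(0.0, 1.0, T)
    # Unnormalized Gaussian density at each frame position.
    weights = np.exp(-0.5 * ((positions - mu) / sigma) ** 2)
    # Normalize so the weights form a distribution over time.
    weights /= weights.sum()
    # Context vector: attention-weighted sum of frame features.
    context = weights @ features
    return context, weights

# Example: 30 frames with 4-dim features, attention on the clip's middle.
feats = np.arange(120, dtype=float).reshape(30, 4)
ctx, w = gaussian_temporal_attention(feats, mu=0.5, sigma=0.1)
```

Because `mu` and `sigma` enter the weights through differentiable operations, they could in principle be produced by a network branch and trained end to end, which is consistent with the fully learnable framing in the abstract.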