{"title":"一种用于视频增强的多层可视化语言","authors":"Danny Zhu, M. Veloso","doi":"10.1145/3139295.3139307","DOIUrl":null,"url":null,"abstract":"There are many tasks that humans perform that involve observing video streams, as well as tracking objects or quantities related to the events depicted in the video, that can be made more transparent by the addition of appropriate drawings to a video, e.g., tracking the behavior of autonomous robots or following the motion of players across a soccer field. We describe a specification of a general means of describing groups of time-varying discrete visualizations, as well as a demonstration of overlaying those visualizations onto videos in an augmented reality manner so as to situate them in a real-world context, when such a context is available and meaningful. Creating such videos can be especially useful in the case of autonomous agents operating in the real world; we demonstrate our visualization procedures on two example robotic domains. We take the complex algorithms controlling the robots' actions in the real world and create videos that are much more informative than the original plain videos.","PeriodicalId":92446,"journal":{"name":"SIGGRAPH Asia 2017 Symposium on Visualization. SIGGRAPH Asia Symposium on Visualization (2017 : Bangkok, Thailand)","volume":"43 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2017-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A multi-layered visualization language for video augmentation\",\"authors\":\"Danny Zhu, M. Veloso\",\"doi\":\"10.1145/3139295.3139307\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"There are many tasks that humans perform that involve observing video streams, as well as tracking objects or quantities related to the events depicted in the video, that can be made more transparent by the addition of appropriate drawings to a video, e.g., tracking the behavior of autonomous robots or following the motion of players across a soccer field. We describe a specification of a general means of describing groups of time-varying discrete visualizations, as well as a demonstration of overlaying those visualizations onto videos in an augmented reality manner so as to situate them in a real-world context, when such a context is available and meaningful. Creating such videos can be especially useful in the case of autonomous agents operating in the real world; we demonstrate our visualization procedures on two example robotic domains. We take the complex algorithms controlling the robots' actions in the real world and create videos that are much more informative than the original plain videos.\",\"PeriodicalId\":92446,\"journal\":{\"name\":\"SIGGRAPH Asia 2017 Symposium on Visualization. SIGGRAPH Asia Symposium on Visualization (2017 : Bangkok, Thailand)\",\"volume\":\"43 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-11-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"SIGGRAPH Asia 2017 Symposium on Visualization. 
SIGGRAPH Asia Symposium on Visualization (2017 : Bangkok, Thailand)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3139295.3139307\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"SIGGRAPH Asia 2017 Symposium on Visualization. SIGGRAPH Asia Symposium on Visualization (2017 : Bangkok, Thailand)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3139295.3139307","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A multi-layered visualization language for video augmentation
Many tasks that humans perform involve observing video streams and tracking objects or quantities related to the events depicted in them, e.g., monitoring the behavior of autonomous robots or following the motion of players across a soccer field; such tasks can be made more transparent by adding appropriate drawings to the video. We present a general specification for describing groups of time-varying discrete visualizations, and demonstrate how to overlay those visualizations onto videos in an augmented reality manner so as to situate them in a real-world context, when such a context is available and meaningful. Creating such videos is especially useful for autonomous agents operating in the real world; we demonstrate our visualization procedures on two example robotic domains. Taking the complex algorithms that control the robots' actions in the real world, we create videos that are far more informative than the original plain videos.
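To make the idea of a layered, time-varying overlay specification concrete, the sketch below shows one possible encoding in Python with OpenCV. It is a minimal illustration under assumed structure: the class names, fields, and per-frame callback design are our own assumptions, not the language actually defined in the paper.

```python
# A minimal sketch (not the paper's specification) of grouping time-varying
# drawing primitives into named layers and compositing them onto video frames.
from dataclasses import dataclass
from typing import Callable, List, Tuple

import cv2
import numpy as np

Color = Tuple[int, int, int]  # BGR, as used by OpenCV


@dataclass
class Circle:
    center: Tuple[int, int]  # pixel coordinates in the frame
    radius: int
    color: Color
    thickness: int = 2

    def draw(self, frame: np.ndarray) -> None:
        cv2.circle(frame, self.center, self.radius, self.color, self.thickness)


@dataclass
class Polyline:
    points: List[Tuple[int, int]]
    color: Color
    thickness: int = 2

    def draw(self, frame: np.ndarray) -> None:
        pts = np.array(self.points, dtype=np.int32).reshape(-1, 1, 2)
        cv2.polylines(frame, [pts], False, self.color, self.thickness)


@dataclass
class Layer:
    """A named group of primitives whose contents may change on every frame."""
    name: str
    # Maps a frame index to the primitives visible on that frame.
    primitives_at: Callable[[int], List]
    visible: bool = True


def augment_video(in_path: str, out_path: str, layers: List[Layer]) -> None:
    """Read a video, draw every visible layer on each frame, write the result."""
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        for layer in layers:  # later layers are drawn on top of earlier ones
            if layer.visible:
                for prim in layer.primitives_at(frame_idx):
                    prim.draw(frame)
        out.write(frame)
        frame_idx += 1
    cap.release()
    out.release()


# Example usage: one layer marking a hypothetical tracked robot position
# that moves across the frame over time.
robot_track = Layer(
    name="robot",
    primitives_at=lambda t: [
        Circle(center=(100 + 2 * t, 200), radius=8, color=(0, 0, 255))
    ],
)
augment_video("input.mp4", "augmented.mp4", [robot_track])
```

The layer abstraction is what gives the overlay its multi-layered character in this sketch: each layer can be toggled or reordered independently, and its contents are recomputed per frame, which is how time-varying quantities (robot state, player positions) stay synchronized with the underlying video.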