{"title":"Urban Movie Map for Walkers: Route View Synthesis using 360° Videos","authors":"Naoki Sugimoto, Toru Okubo, K. Aizawa","doi":"10.1145/3372278.3390707","DOIUrl":null,"url":null,"abstract":"We propose a movie map for walkers based on synthesized street walking views along routes in a particular area. From the perspectives of walkers, we captured a number of omnidirectional videos along streets in the target area (1km2 around Kyoto Station). We captured a separate video for each street. We then performed simultaneous localization and mapping to obtain camera poses from key video frames in all of the videos and adjusted the coordinates based on a map of the area using reference points. To join one video to another smoothly at intersections, we identified frames of video intersection based on camera locations and visual feature matching. Finally, we generated moving route views by connecting the omnidirectional videos based on the alignment of the cameras. To improve smoothness at intersections, we generated rotational views by mixing video intersection frames from two videos. The results demonstrate that our method can precisely identify intersection frames and generate smooth connections between videos at intersections.","PeriodicalId":158014,"journal":{"name":"Proceedings of the 2020 International Conference on Multimedia Retrieval","volume":"11 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2020 International Conference on Multimedia Retrieval","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3372278.3390707","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
We propose a movie map for walkers based on synthesized street-walking views along routes in a particular area. From a walker's perspective, we captured a number of omnidirectional videos along streets in the target area (1 km² around Kyoto Station), with a separate video for each street. We then performed simultaneous localization and mapping (SLAM) to obtain camera poses from keyframes of all the videos and adjusted the resulting coordinates to a map of the area using reference points. To join one video smoothly to another at an intersection, we identified the intersection frames of the two videos based on camera locations and visual feature matching. Finally, we generated moving route views by connecting the omnidirectional videos according to the alignment of their cameras. To improve smoothness at intersections, we generated rotational views by mixing the intersection frames of the two videos. The results demonstrate that our method precisely identifies intersection frames and generates smooth connections between videos at intersections.
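The abstract describes finding the frame pair where two street videos cross by combining camera locations with visual feature matching. The paper does not give the exact algorithm, so the following is only a minimal sketch under assumed inputs: hypothetical `(N, 2)` arrays of keyframe positions in map coordinates and lists of grayscale keyframes, with geometric candidates filtered by a distance threshold and verified with ORB matching in OpenCV. The thresholds and function names are illustrative, not the authors' implementation.

```python
import cv2
import numpy as np

def find_intersection_frames(poses_a, poses_b, frames_a, frames_b,
                             dist_thresh=2.0, min_matches=50):
    """Return the (i, j) keyframe pair where video A and video B cross.

    poses_a, poses_b: (N, 2) arrays of camera positions in map coordinates.
    frames_a, frames_b: lists of grayscale keyframe images.
    """
    orb = cv2.ORB_create(nfeatures=1000)
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    best = None
    for i, pa in enumerate(poses_a):
        # Candidate pairs: cameras of video B within dist_thresh of this camera.
        d = np.linalg.norm(poses_b - pa, axis=1)
        for j in np.where(d < dist_thresh)[0]:
            ka, da = orb.detectAndCompute(frames_a[i], None)
            kb, db = orb.detectAndCompute(frames_b[j], None)
            if da is None or db is None:
                continue
            matches = bf.match(da, db)
            # Keep the geometrically closest pair with enough visual support.
            if len(matches) >= min_matches and (best is None or d[j] < best[0]):
                best = (d[j], i, int(j))
    return None if best is None else (best[1], best[2])
```

In this sketch, the distance check prunes the search to a handful of candidate pairs per street crossing, and feature matching disambiguates cases where trajectories pass close together without actually sharing a view.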
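The rotational views at intersections can be illustrated with a standard property of equirectangular 360° frames: a rotation of the viewing direction about the vertical axis corresponds to a horizontal pixel shift. The sketch below approximates a turn by rolling video A's intersection frame toward video B's heading while cross-fading into B's frame. The yaw convention, frame shapes, and linear blending schedule are assumptions, not the paper's exact mixing procedure.

```python
import cv2
import numpy as np

def rotational_transition(frame_a, frame_b, yaw_deg, n_steps=15):
    """Yield intermediate equirectangular frames turning from A to B.

    frame_a, frame_b: equirectangular images of identical shape and dtype.
    yaw_deg: heading change from video A to video B at the intersection.
    """
    w = frame_a.shape[1]
    for t in np.linspace(0.0, 1.0, n_steps):
        # Yaw rotation in an equirectangular frame is a horizontal shift
        # (w pixels correspond to 360 degrees).
        shift = int(round(t * yaw_deg * w / 360.0))
        rotated = np.roll(frame_a, -shift, axis=1)
        # Cross-fade into frame B as the rotation completes.
        yield cv2.addWeighted(rotated, 1.0 - t, frame_b, t, 0.0)
```

Mixing the two intersection frames during the rotation, rather than cutting at its end, is what hides the parallax and exposure differences between the two videos that a hard cut would reveal.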