PopStage: The Generation of Stage Cross-Editing Video Based on Spatio-Temporal Matching

Dawon Lee, Jung Eun Yoo, Kyungmin Cho, Bumki Kim, Gyeonghun Im, Jun-yong Noh
{"title":"PopStage: The Generation of Stage Cross-Editing Video Based on Spatio-Temporal Matching","authors":"Dawon Lee, Jung Eun Yoo, Kyungmin Cho, Bumki Kim, Gyeonghun Im, Jun-yong Noh","doi":"10.1145/3550454.3555467","DOIUrl":null,"url":null,"abstract":"StageMix is a mixed video that is created by concatenating the segments from various performance videos of an identical song in a visually smooth manner by matching the main subject's silhouette presented in the frame. We introduce PopStage, which allows users to generate a StageMix automatically. PopStage is designed based on the StageMix Editing Guideline that we established by interviewing creators as well as observing their workflows. PopStage consists of two main steps: finding an editing path and generating a transition effect at a transition point. Using a reward function that favors visual connection and the optimality of transition timing across the videos, we obtain the optimal path that maximizes the sum of rewards through dynamic programming. Given the optimal path, PopStage then aligns the silhouettes of the main subject from the transitioning video pair to enhance the visual connection at the transition point. The virtual camera view is next optimized to remove the black areas that are often created due to the transformation needed for silhouette alignment, while reducing pixel loss. In this process, we enforce the view to be the maximum size while maintaining the temporal continuity across the frames. Experimental results show that PopStage can generate a StageMix of a similar quality to those produced by professional creators in a highly reduced production time.","PeriodicalId":7121,"journal":{"name":"ACM Trans. Graph.","volume":"25 1","pages":"1 - 13"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Trans. Graph.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3550454.3555467","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

A StageMix is a mixed video created by concatenating segments from multiple performance videos of the same song in a visually smooth manner, matching the main subject's silhouette across frames. We introduce PopStage, which generates a StageMix automatically. PopStage is designed around the StageMix Editing Guideline that we established by interviewing creators and observing their workflows. PopStage consists of two main steps: finding an editing path and generating a transition effect at each transition point. Using a reward function that favors visual connection and optimal transition timing across the videos, we obtain the path that maximizes the sum of rewards via dynamic programming. Given this optimal path, PopStage aligns the silhouettes of the main subject in the transitioning video pair to strengthen the visual connection at the transition point. The virtual camera view is then optimized to remove the black areas that the transformation needed for silhouette alignment often creates, while reducing pixel loss. In this process, we enforce the view to be as large as possible while maintaining temporal continuity across frames. Experimental results show that PopStage can generate a StageMix of quality comparable to those produced by professional creators, in a greatly reduced production time.
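The editing-path step is, at its core, an optimal-path search over per-frame transition rewards. The sketch below is a minimal illustration of such a dynamic program, not the authors' implementation: it assumes the performance videos are already temporally aligned to the song, uses a hypothetical reward tensor `reward[t, u, v]` for cutting from video `u` to video `v` at frame `t` (diagonal entries score staying on the same video), and omits constraints the editing guideline presumably imposes, such as a minimum segment length between cuts.

```python
import numpy as np

def find_editing_path(reward):
    """Return a per-frame video index sequence maximizing the summed reward.

    reward: array of shape (T, V, V); reward[t, u, v] scores showing video v
            at frame t after showing video u at frame t-1 (higher means a
            smoother silhouette match and better transition timing).
    """
    T, V, _ = reward.shape
    best = np.full((T, V), -np.inf)      # best[t, v]: best total reward ending on video v at frame t
    back = np.zeros((T, V), dtype=int)   # back-pointers for path recovery
    best[0] = 0.0                        # any starting video is allowed

    for t in range(1, T):
        # cand[u, v]: score of being on video u at t-1 and video v at t
        cand = best[t - 1][:, None] + reward[t]
        back[t] = cand.argmax(axis=0)
        best[t] = cand.max(axis=0)

    # trace the optimal path backwards from the best final state
    path = np.zeros(T, dtype=int)
    path[-1] = int(best[-1].argmax())
    for t in range(T - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return path

# Toy usage: 3 source videos, 300 aligned frames, random rewards.
# Boosting the diagonal makes staying on the same video preferable,
# so cuts only happen where a cross-video reward is strong enough.
rng = np.random.default_rng(0)
reward = rng.random((300, 3, 3)) + 2.0 * np.eye(3)
print(find_editing_path(reward)[:20])
```

In the paper's setting, the reward at a candidate cut would come from silhouette similarity and transition-timing terms rather than random values, and the chosen path then feeds the second step (silhouette alignment and virtual-camera optimization at each transition point).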