
2011 Conference for Visual Media Production: Latest Publications

Automatic Object Segmentation from Calibrated Images
Pub Date: 2011-11-16 DOI: 10.1109/CVMP.2011.21
N. Campbell, George Vogiatzis, Carlos Hernández, R. Cipolla
This paper addresses the problem of automatically obtaining the object/background segmentation of a rigid 3D object observed in a set of images that have been calibrated for camera pose and intrinsics. Such segmentations can be used to obtain a shape representation of a potentially texture-less object by computing a visual hull. We propose an automatic approach where the object to be segmented is identified by the pose of the cameras instead of user input such as 2D bounding rectangles or brush strokes. The key to our method is a pairwise MRF framework that combines (a) foreground/background appearance models, (b) epipolar constraints and (c) weak stereo correspondence into a single segmentation cost function that can be efficiently solved by graph cuts. The segmentation thus obtained is further improved using silhouette coherency and then used to update the foreground/background appearance models, which are fed into the next graph-cut computation. These two steps are iterated until the segmentation converges. Our method can automatically provide a 3D surface representation even in texture-less scenes where MVS methods might fail. Furthermore, it confers improved performance in images where the object is not readily separable from the background in colour space, an area that previous segmentation approaches have found challenging.
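As a concrete illustration of the graph-cut core described above, the following is a minimal sketch (not the authors' implementation) of binary segmentation with PyMaxflow, assuming per-pixel foreground/background negative log-likelihoods have already been computed from the appearance models; the epipolar and weak-stereo terms of the full cost function are omitted.

```python
# Minimal graph-cut segmentation sketch. Assumed inputs: HxW arrays of
# negative log-likelihoods under the fg/bg appearance models.
import numpy as np
import maxflow  # PyMaxflow

def graphcut_segment(nll_fg, nll_bg, smoothness=1.0):
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(nll_fg.shape)
    # Pairwise Potts term between 4-connected neighbours.
    g.add_grid_edges(nodes, weights=smoothness)
    # Unary terms as terminal capacities: a pixel pays nll_fg when
    # labelled foreground (sink side) and nll_bg otherwise.
    g.add_grid_tedges(nodes, nll_fg, nll_bg)
    g.maxflow()
    return g.get_grid_segments(nodes)  # boolean HxW map, True = foreground
```

In the iterative scheme the abstract describes, the returned labels would update the appearance models before the next cut.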
Citations: 42
Cooperative patch-based 3D surface tracking
Pub Date: 2011-11-16 DOI: 10.1109/CVMP.2011.14
M. Klaudiny, A. Hilton
This paper presents a novel dense motion capture technique which creates a temporally consistent mesh sequence from several calibrated and synchronised video sequences of a dynamic object. A surface patch model based on the topology of a user-specified reference mesh is employed to track the surface of the object over time. Multi-view 3D matching of surface patches using a novel cooperative minimisation approach provides initial motion estimates which are robust to large, rapid non-rigid changes of shape. A Laplacian deformation subsequently regularises the motion of the whole mesh, using the weighted vertex displacements as soft constraints. An unregistered surface geometry independently reconstructed at each frame is incorporated as a shape prior to improve the quality of tracking. The method is evaluated in a challenging scenario of facial performance capture. Results demonstrate accurate tracking of fast, complex expressions over long sequences without the use of markers or a pattern.
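The Laplacian regularisation step lends itself to a short sketch. Below is a hedged illustration (the uniform Laplacian and all names are our assumptions, not the paper's exact formulation) of deforming a mesh so it stays close to its rest-pose differential coordinates while weighted soft constraints pull anchor vertices towards tracked targets.

```python
# Sketch of Laplacian deformation with soft point constraints.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def laplacian_deform(verts, edges, anchors, targets, weights):
    """verts: Nx3 rest pose; edges: (i, j) pairs; anchors: M vertex ids;
    targets: Mx3 desired positions; weights: M soft-constraint weights."""
    n = len(verts)
    rows, cols, vals = [], [], []
    deg = np.zeros(n)
    for i, j in edges:  # uniform graph Laplacian from the edge list
        rows += [i, j]; cols += [j, i]; vals += [-1.0, -1.0]
        deg[i] += 1.0; deg[j] += 1.0
    rows += list(range(n)); cols += list(range(n)); vals += list(deg)
    L = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))
    delta = L @ np.asarray(verts, float)  # rest-pose differential coords
    w = np.asarray(weights, dtype=float)
    # Weighted rows pinning anchor vertices to their tracked targets.
    C = sp.csr_matrix((w, (np.arange(len(anchors)), anchors)),
                      shape=(len(anchors), n))
    A = sp.vstack([L, C]).tocsr()
    b = np.vstack([delta, w[:, None] * np.asarray(targets, float)])
    AtA, Atb = (A.T @ A).tocsc(), A.T @ b
    # Least-squares solution, one sparse solve per coordinate axis.
    return np.column_stack([spla.spsolve(AtA, Atb[:, k]) for k in range(3)])
```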
Citations: 12
Multi-camera Scheduling for Video Production
Pub Date: 2011-11-16 DOI: 10.1109/CVMP.2011.8
F. Daniyal, A. Cavallaro
We present a novel algorithm for automated video production based on content ranking. The proposed algorithm generates videos by performing camera selection while minimizing the number of inter-camera switches. We model the problem as a finite-horizon Partially Observable Markov Decision Process over temporal windows, and we use a multivariate Gaussian distribution to represent the content-quality score for each camera. The performance of the proposed approach is demonstrated on a multi-camera setup of fixed cameras with partially overlapping fields of view. Subjective experiments based on the Turing test confirmed the quality of the automatically produced videos. The proposed approach is also compared with recent methods based on Recursive Decision and on Dynamic Bayesian Networks, and it outperforms both.
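To make the selection/switching trade-off concrete, here is a hedged sketch of camera selection as a finite-horizon dynamic programme over per-frame content-quality scores; the paper's POMDP formulation is richer, and the fixed switch penalty below is an assumed stand-in.

```python
# Viterbi-style camera scheduling: maximise summed quality minus switches.
import numpy as np

def select_cameras(quality, switch_cost):
    """quality: TxC array of content scores; returns one camera per frame."""
    T, C = quality.shape
    best = quality[0].copy()           # best score of a path ending in camera c
    back = np.zeros((T, C), dtype=int)
    for t in range(1, T):
        stay = best                    # keep the same camera
        jump = best.max() - switch_cost  # switch from the best previous camera
        switch = jump > stay
        back[t] = np.where(switch, best.argmax(), np.arange(C))
        best = np.where(switch, jump, stay) + quality[t]
    seq = [int(best.argmax())]
    for t in range(T - 1, 0, -1):      # trace the optimal sequence backwards
        seq.append(int(back[t, seq[-1]]))
    return seq[::-1]
```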
Citations: 38
Flowlab - An Interactive Tool for Editing Dense Image Correspondences
Pub Date: 2011-11-16 DOI: 10.1109/CVMP.2011.13
F. Klose, K. Ruhl, C. Lipski, M. Magnor
Finding dense correspondences between two images is a well-researched but still unsolved problem. For various tasks in computer graphics, e.g. image interpolation, obtaining plausible correspondences is a vital component. We present an interactive tool that allows the user to modify and correct dense correspondence maps between two given images. Incorporating state-of-the-art algorithms for image segmentation, correspondence estimation and optical flow, our tool assists the user in selecting and correcting mismatched correspondences.
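One building block such an editor needs is a way to visualise a candidate correspondence map, e.g. by warping one image towards the other so mismatches become visible. A minimal sketch for a single-channel image follows; the flow convention (offsets mapping coordinates in A to positions in B) is our assumption.

```python
# Warp image B towards image A with a dense flow field for inspection.
import numpy as np
from scipy.ndimage import map_coordinates

def warp_with_flow(img_b, flow):
    """img_b: HxW grayscale array; flow: HxWx2 array of (dx, dy) offsets."""
    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    sample_x = xs + flow[..., 0]
    sample_y = ys + flow[..., 1]
    # Bilinearly sample B at the positions the flow points to.
    return map_coordinates(img_b, [sample_y, sample_x], order=1, mode='nearest')
```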
Citations: 13
Robust Color Correction for Stereo
Pub Date: 2011-11-16 DOI: 10.1109/CVMP.2011.18
H. Faridul, J. Stauder, A. Trémeau
Color differences between the views of a stereo pair are a challenging problem. Applications such as stereo image compression demand the compensation of these color differences, which is typically done by methods called color mapping. Color mapping is based on feature correspondences; from these, color correspondences are generated, which are ultimately used to build the color mapping model. This paper focuses on the detection of outliers in the feature correspondences. We propose a novel iterative outlier removal method which exploits the neighborhood color information of the feature correspondences. From the analysis of our experimental results and comparison with existing methods, we conclude that spatial color neighborhood information around the feature correspondences, combined with iterative color mapping, can detect outliers in general and yields a robust color correction.
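A hedged sketch of the fit-and-reject loop follows: fit a simple per-channel affine colour mapping to the matched colours, discard correspondences with large residuals, and refit. A plain residual test stands in here for the paper's spatial colour-neighbourhood criterion, and all names are illustrative.

```python
# Iterative robust fit of an affine colour mapping to correspondences.
import numpy as np

def robust_color_map(src, dst, n_iters=5, thresh=10.0):
    """src, dst: Nx3 arrays of matched RGB values from the two views."""
    A = np.hstack([src, np.ones((len(src), 1))])  # affine model per channel
    keep = np.ones(len(src), dtype=bool)
    M = None
    for _ in range(n_iters):
        if keep.sum() < 4:        # not enough inliers to fit the model
            break
        M, *_ = np.linalg.lstsq(A[keep], dst[keep], rcond=None)
        resid = np.linalg.norm(A @ M - dst, axis=1)
        keep = resid < thresh     # reject outlier correspondences, refit
    return M, keep                # 4x3 mapping and the inlier mask
```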
Citations: 12
Real-time Person Tracking in High-resolution Panoramic Video for Automated Broadcast Production
Pub Date: 2011-11-16 DOI: 10.1109/CVMP.2011.9
Rene Kaiser, M. Thaler, Andreas Kriechbaum, Hannes Fassold, W. Bailer, Jakub Rosner
For enabling immersive user experiences for interactive TV services and automating camera view selection and framing, knowledge of the location of persons in a scene is essential. We describe an architecture for detecting and tracking persons in high-resolution panoramic video streams obtained from the Omni Cam, a panoramic camera stitching video streams from six HD-resolution tiles. We use a CUDA-accelerated feature point tracker, a blob detector and a CUDA HOG person detector for region tracking in each of the tiles before fusing the results for the entire panorama. In this paper we focus on the real-time application of the HOG person detector and on the speedup of the feature point tracker obtained by porting it to NVIDIA's Fermi architecture. Evaluations indicate a significant speedup for our feature point tracker implementation, enabling the entire process to run in a real-time system.
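For reference, a minimal CPU-side sketch of per-tile HOG person detection with OpenCV is given below; the paper uses a CUDA HOG implementation, and the fusion of tile detections into panorama coordinates is only indicated by a comment.

```python
# Per-tile HOG person detection with OpenCV's default people detector.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_people_in_tiles(tiles):
    """tiles: list of BGR images, one per camera tile of the panorama."""
    detections = []
    for idx, tile in enumerate(tiles):
        boxes, weights = hog.detectMultiScale(tile, winStride=(8, 8))
        # Keep the tile index so boxes can later be mapped into
        # panorama coordinates and fused across tiles.
        detections += [(idx, tuple(box), float(w))
                       for box, w in zip(boxes, weights)]
    return detections
```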
Citations: 28
Realtime Video Based Water Surface Approximation
Pub Date: 2011-11-16 DOI: 10.1109/CVMP.2011.19
Chuan Li, M. Shaw, D. Pickup, D. Cosker, P. Willis, P. Hall
This paper describes an approach for automatically producing convincing water surfaces from video data in real time. Fluid simulation has long been studied in the Computer Graphics literature, but the methods developed are expensive and require input from highly trained artists. In contrast, our method is a low-cost Computer Vision based solution which requires only a single video as a source. Our output consists of an animated mesh of the water surface, captured together with surface velocities and texture maps from the video data. As an example of what can be done with this data, a modified form of video textures is used to create naturalistic infinite transition loops of the captured water surface. We demonstrate our approach over a wide range of inputs, including quiescent lakes, breaking sea waves, and waterfalls. All source videos we use are taken from a third-party publicly available database.
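As a hedged sketch of the video-textures component mentioned above, the brute-force search below finds the two most similar frames at least `min_len` frames apart and loops between them; the paper's modified formulation is more sophisticated, and the names here are ours.

```python
# Find a seamless loop point: the most similar pair of distant frames.
import numpy as np

def best_loop(frames, min_len=30):
    """frames: list of HxW or HxWx3 float arrays; returns (start, end)."""
    n = len(frames)
    best, best_pair = np.inf, (0, n - 1)
    for i in range(n):
        for j in range(i + min_len, n):
            d = np.mean((frames[i] - frames[j]) ** 2)  # appearance difference
            if d < best:
                best, best_pair = d, (i, j)
    return best_pair  # play frames[start:end], then jump back to start
```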
Citations: 5
Projective Reconstruction from Incomplete Trajectories by Global and Local Constraints
Pub Date: 2011-11-16 DOI: 10.1109/CVMP.2011.15
H. Ackermann, B. Rosenhahn
The paper deals with projective shape and motion reconstruction by subspace iterations. A prerequisite of factorization-style algorithms is that all feature points must be observed in all images, a condition which is hardly realistic in real videos. We therefore address the problem of estimating structure and motion in the presence of missing features. The proposed algorithm does not require initialization and uniformly handles all available data. The computed solution is global in the sense that it does not merge partial solutions incrementally or hierarchically. The global cost due to the factorization is further amended by local constraints to regularize and stabilize the estimations. It is shown how both costs can be jointly minimized in the presence of unobserved points. Using synthetic and real image sequences with up to 60% missing data, we demonstrate that our algorithm is accurate and reliable.
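The core alternation behind factorisation with missing data can be sketched as follows: a bare rank-4 alternating least squares fitted on observed entries only. Projective depths and the paper's local constraints are omitted, and the function name is ours.

```python
# Alternating least-squares factorisation of a partially observed
# measurement matrix W into motion M (2Fx4) and shape S (4xN).
import numpy as np

def factorize_missing(W, mask, rank=4, n_iters=100):
    """W: 2FxN measurement matrix; mask: boolean, True where observed."""
    F2, N = W.shape
    rng = np.random.default_rng(0)
    M = rng.standard_normal((F2, rank))
    S = rng.standard_normal((rank, N))
    for _ in range(n_iters):
        for j in range(N):      # refit each shape column on its observed rows
            o = mask[:, j]
            S[:, j] = np.linalg.lstsq(M[o], W[o, j], rcond=None)[0]
        for i in range(F2):     # refit each motion row on its observed columns
            o = mask[i]
            M[i] = np.linalg.lstsq(S[:, o].T, W[i, o], rcond=None)[0]
    return M, S
```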
Citations: 3
Towards Moment Imagery: Automatic Cinemagraphs
Pub Date: 2011-11-16 DOI: 10.1109/CVMP.2011.16
J. Tompkin, Fabrizio Pece, K. Subr, J. Kautz
The imagination of the online photographic community has recently been sparked by so-called cinemagraphs: short, seamlessly looping animated GIF images created from video in which only parts of the image move. These cinemagraphs capture the dynamics of one particular region in an image for dramatic effect, and provide the creator with control over what part of a moment to capture. We create a cinemagraph authoring tool combining video motion stabilisation, segmentation, interactive motion selection, motion loop detection and selection, and cinemagraph rendering. Our work pushes toward the easy and versatile creation of moments that cannot be represented with still imagery.
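The final compositing step of a cinemagraph is simple enough to sketch: freeze every pixel to a chosen still frame except inside a user-selected motion mask. The sketch below assumes stabilised frames and a binary mask are already available from the earlier pipeline stages.

```python
# Composite a cinemagraph: still frame everywhere except the motion mask.
import numpy as np

def composite_cinemagraph(frames, still_idx, mask):
    """frames: list of HxWx3 arrays; mask: HxW bool, True where motion stays."""
    still = frames[still_idx]
    m = mask[..., None]                        # broadcast over colour channels
    return [np.where(m, f, still) for f in frames]
```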
Citations: 41
Disparity-Aware Stereo 3D Production Tools
Pub Date: 2011-11-16 DOI: 10.1109/CVMP.2011.25
A. Smolic, Steven Poulakos, Simon Heinzle, P. Greisen, Manuel Lang, A. Sorkine-Hornung, Miquel A. Farre, N. Stefanoski, Oliver Wang, L. Schnyder, Rafael Monroy, M. Gross
Stereoscopic 3D (S3D) has reached wide levels of adoption in consumer and professional markets. However, the production of high-quality S3D content is still a difficult and expensive art. Various S3D production tools and systems have been released recently to assist high-quality content creation. This paper presents a number of such algorithms, tools and systems developed at Disney Research Zurich, all of which make use of disparity-aware processing.
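As a minimal example of the disparity estimation underlying such disparity-aware tools, the sketch below computes a disparity map with OpenCV's semi-global matcher; the parameter values are illustrative, not those of the systems described in the paper.

```python
# Disparity estimation for a rectified stereo pair with OpenCV SGBM.
import cv2

def disparity_map(left_gray, right_gray):
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
    # OpenCV returns fixed-point disparities scaled by 16.
    return sgbm.compute(left_gray, right_gray).astype('float32') / 16.0
```

The resulting per-pixel disparities are what tools of this kind typically feed into warping, retargeting or depth-grading stages.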
Citations: 22