
Proceedings of the 11th European Conference on Visual Media Production: Latest Publications

Plasticine shading
Pub Date: 2014-11-13 DOI: 10.1145/2668904.2668933
L. Howell, Philip Child, P. Hall
Plasticine is a core material in the production of stop-motion animation. In some situations it is valuable to use computer simulation techniques to generate objects that appear to be made of plasticine. In order to render this material accurately, we present a new shading model based on the true physical properties of plasticine. We show that our new model represents the material approximately 20% more accurately than existing state-of-the-art surface shaders. Results are shown comparing our new model to state-of-the-art models and to an existing solution used in production.
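The abstract does not reproduce the shading model itself, so purely as a rough illustration, here is a minimal Python sketch of a parametric surface shader in the same spirit: a Lambertian diffuse term plus a broad, low-intensity specular lobe and a small sheen term, loosely mimicking plasticine's waxy look. All parameter names and values are hypothetical, not the authors'.

```python
import numpy as np

def plasticine_shade(normal, light_dir, view_dir,
                     albedo=np.array([0.8, 0.5, 0.4]),
                     gloss=0.3, sheen=0.15):
    """Toy shader: Lambertian diffuse plus a broad, low-intensity specular
    lobe and a small grazing-angle sheen.  All directions are unit vectors."""
    n_dot_l = max(np.dot(normal, light_dir), 0.0)
    half_vec = light_dir + view_dir
    half_vec /= np.linalg.norm(half_vec)
    # a low Blinn-Phong exponent gives the soft, waxy highlight
    spec = gloss * max(np.dot(normal, half_vec), 0.0) ** 8
    # a small view-dependent sheen term lifts grazing angles
    rim = sheen * (1.0 - max(np.dot(normal, view_dir), 0.0)) ** 2
    return albedo * n_dot_l + spec + rim

# Example: light at 45 degrees above, camera straight on
print(plasticine_shade(np.array([0.0, 0.0, 1.0]),
                       np.array([0.0, 0.7071, 0.7071]),
                       np.array([0.0, 0.0, 1.0])))
```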
Citations: 0
Optimizing large scale CUDA applications using input data specific optimizations
Pub Date: 2014-11-13 DOI: 10.1145/2668904.2668941
B. Taskov
CUDA applications and general-purpose GPU (GPGPU) programs are widely used nowadays for solving computationally intensive tasks. There is a substantial effort in the form of tools, papers, books and features targeted at GPGPU APIs such as CUDA and OpenCL. The GPU architecture, being substantially different from traditional CPU architectures (x86, PowerPC, ARM), requires a different approach and introduces a different set of challenges. Apart from the traditional and well-examined GPGPU problems, such as memory access patterns, parallel designs and occupancy, there is another important but not well-studied setback: beyond a certain point, the bigger a CUDA application gets (in terms of lines of code), the slower it becomes, mostly due to register spilling. Register spilling is a problem for most available architectures today, but by its nature it can easily become a massive bottleneck on the GPU. We examine in detail why this happens and what the common ways of solving it are, and we propose one simple, presently undocumented approach that may alleviate the issue in some situations. For the purpose of this paper we focus on the NVIDIA Fermi architecture.
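As a hedged illustration of the register-spilling theme (not the paper's undocumented technique), the PyCUDA sketch below compiles the same kernel with and without a register cap and reads back register and local-memory usage; non-zero local memory indicates spilling. It assumes a machine with an NVIDIA GPU, the CUDA toolkit (nvcc) and pycuda installed.

```python
# Sketch only: requires an NVIDIA GPU, nvcc and pycuda.
import pycuda.autoinit  # noqa: F401  creates a CUDA context
from pycuda.compiler import SourceModule

kernel_src = r"""
__global__ void saxpy(float a, const float *x, float *y, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}
"""

# --maxrregcount caps registers per thread; on a kernel large enough to
# exceed the cap, the spilled registers show up as local-memory bytes.
for opts in ([], ["--maxrregcount=16"]):
    mod = SourceModule(kernel_src, options=opts)
    func = mod.get_function("saxpy")
    print(opts or "default",
          "regs:", func.num_regs,
          "spill bytes:", func.local_size_bytes)
```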
Citations: 0
Estimating camera intrinsics from motion blur
Pub Date: 2014-11-13 DOI: 10.1145/2668904.2668934
Alastair Barber, Matthew A. Brown, Paul Hogbin, D. Cosker
Estimating changes in camera parameters, such as motion, focal length and exposure time, over a single frame or a sequence of frames is an integral part of many computer vision applications. Rapid changes in these parameters often cause motion blur to be present in an image, which can make traditional methods of feature identification and tracking difficult. Here we present a method for estimating the scale changes brought about by a change in focal length from a single motion-blurred frame. We also use the results from two separate methods for determining the rotation of a pair of motion-blurred frames to estimate the exposure time of a frame (i.e. the shutter angle).
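The shutter-angle idea admits a back-of-envelope illustration: if the camera rotates at a constant rate, the ratio of the rotation blurred into a single frame to the rotation between consecutive frames gives the fraction of the frame interval during which the shutter was open. A minimal sketch, simplified from the abstract; the function names are ours:

```python
def shutter_angle_deg(blur_rotation_deg, interframe_rotation_deg):
    """Back-of-envelope estimate: under uniform camera rotation, the
    blur-to-interframe rotation ratio is the fraction of the frame
    interval the shutter was open; a full interval is 360 degrees."""
    return 360.0 * blur_rotation_deg / interframe_rotation_deg

def exposure_time_s(shutter_angle, fps):
    """Convert a shutter angle to an exposure time at a given frame rate."""
    return (shutter_angle / 360.0) / fps

angle = shutter_angle_deg(blur_rotation_deg=1.5, interframe_rotation_deg=3.0)
print(angle, exposure_time_s(angle, fps=24.0))  # 180.0 degrees, ~0.0208 s
```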
Citations: 1
Advanced video debanding
Pub Date: 2014-11-13 DOI: 10.1145/2668904.2668912
G. Baugh, A. Kokaram, François Pitié
High efficiency video coding has made it possible to stream video over bandwidth-constrained communication networks. Depending on bit rate requirements, a video encoder sacrifices some image details, which can introduce visual artefacts. Due to aggressive encoding, a contouring staircase artefact called banding can be observed in image regions with very low texture. This paper presents a solution for removing banding artefacts using image filtering and dithering techniques. A new banding index (BI) metric is also presented for quantitatively measuring the amount of banding in an image. Using this BI metric, we assess how much banding YouTube video encoding introduces in a video test dataset. We compare the results of our debanding technique on the YouTube test dataset with those of gradfun, an existing debanding filter in ffmpeg.
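As a toy illustration of the filtering-plus-dithering idea (not the paper's algorithm or its BI metric), the sketch below smooths only near-flat regions, where banding lives, and adds low-amplitude dither so requantisation does not reintroduce the contours. It assumes numpy and scipy; thresholds are illustrative.

```python
import numpy as np
from scipy import ndimage

def deband(img, smooth_sigma=2.0, grad_thresh=2.0, dither_amp=0.5, seed=0):
    """Toy debanding pass: Gaussian-smooth near-flat regions only, keep
    real edges intact, then add low-amplitude dither noise.
    `img` is a float32 array in [0, 255]."""
    smoothed = ndimage.gaussian_filter(img, sigma=smooth_sigma)
    # gradient magnitude of the original picks out real edges to preserve
    gy, gx = np.gradient(img)
    flat = np.hypot(gx, gy) < grad_thresh
    out = np.where(flat, smoothed, img)
    rng = np.random.default_rng(seed)
    out = out + rng.uniform(-dither_amp, dither_amp, img.shape)
    return np.clip(out, 0, 255)

# Synthetic banded gradient: 8 coarse quantisation steps across a strip
band = np.repeat(np.arange(8) * 32.0, 32)[None, :].repeat(64, axis=0)
print(deband(band.astype(np.float32)).shape)  # (64, 256)
```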
Citations: 16
Multi-clip video editing from a single viewpoint
Pub Date: 2014-11-13 DOI: 10.1145/2668904.2668936
Vineet Gandhi, Rémi Ronfard, Michael Gleicher
We propose a framework for automatically generating multiple clips suitable for video editing by simulating pan-tilt-zoom camera movements within the frame of a single static camera. Assuming important actors and objects can be localized using computer vision techniques, our method requires only minimal user input to define the subject matter of each sub-clip. The composition of each sub-clip is automatically computed in a novel L1-norm optimization framework. Our approach encodes several common cinematographic practices into a single convex cost function minimization problem, resulting in aesthetically pleasing sub-clips which can easily be edited together using off-the-shelf multi-clip video editing software. We demonstrate our approach on five video sequences of a live theatre performance by generating multiple synchronized subclips for each sequence.
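A minimal sketch of the L1-norm trajectory idea, using cvxpy as an assumed stand-in for the paper's solver: L1 penalties on the first and second differences of the virtual camera path favour exactly static shots and constant-velocity pans, two common cinematographic conventions. The weights and single-axis setup are illustrative, not the paper's cost function.

```python
import numpy as np
import cvxpy as cp

T = 120
# toy subject track: a noisy drifting x-coordinate around pixel 640
subject_x = np.cumsum(np.random.default_rng(0).normal(0, 2.0, T)) + 640

cam_x = cp.Variable(T)
data_term = cp.norm1(cam_x - subject_x)      # keep the subject framed
static_term = cp.norm1(cp.diff(cam_x, 1))    # L1 on velocity: static shots
pan_term = cp.norm1(cp.diff(cam_x, 2))       # L1 on acceleration: linear pans
prob = cp.Problem(cp.Minimize(data_term + 50 * static_term + 200 * pan_term))
prob.solve()
print("optimal cost:", prob.value)
```

The L1 (rather than L2) penalties are what make the solution piecewise: sparse non-zero differences correspond to a few distinct camera moves separated by holds.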
Citations: 40
Saliency-based parameter tuning for tone mapping
Pub Date: 2014-11-13 DOI: 10.1145/2668904.2668939
Xihe Gao, Stephen Brooks, D. Arnold
We present a saliency-based parameter tuning algorithm that optimizes the parameters of tone mapping operators automatically by minimizing the saliency distortion caused by the tone mapping process. The algorithm employs an improved saliency detection model for HDR images, and the saliency distortion is quantified as the Kullback-Leibler divergence between the saliency distributions of the tone-mapped images and those of the corresponding HDR images. We show that the minimization can be accomplished by employing an evolution strategy in which individuals represent parameter settings and fitness values are based on saliency distortion. The effectiveness of our algorithm is demonstrated through experiments using several tone mapping operators and test images.
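A hedged sketch of the two ingredients named in the abstract, a KL-divergence distortion measure and an evolution strategy, here a simple (1+1)-ES with toy stand-ins for the tone mapping operator and the saliency model (the paper's improved HDR saliency model is not reproduced):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-8):
    """KL divergence between two saliency maps treated as distributions."""
    p = p.ravel() / (p.sum() + eps)
    q = q.ravel() / (q.sum() + eps)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def tune(params0, ref_saliency, tone_map, saliency, sigma=0.2, iters=100):
    """(1+1)-ES: mutate the parameter vector, keep the mutant whenever it
    lowers the saliency distortion of the tone-mapped image."""
    rng = np.random.default_rng(1)
    best = params0
    best_f = kl_divergence(ref_saliency, saliency(tone_map(best)))
    for _ in range(iters):
        cand = best + sigma * rng.normal(size=best.shape)
        f = kl_divergence(ref_saliency, saliency(tone_map(cand)))
        if f < best_f:
            best, best_f = cand, f
    return best, best_f

# toy stand-ins: a gamma curve as "tone mapping", gradient magnitude as "saliency"
hdr = np.random.default_rng(2).random((32, 32)) * 10.0
saliency = lambda im: np.hypot(*np.gradient(im)) + 1e-3
tone_map = lambda p: hdr ** abs(p[0])
print(tune(np.array([1.0]), saliency(hdr), tone_map, saliency))
```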
Citations: 5
Interactive video asset retrieval using sketched queries
Pub Date: 2014-11-13 DOI: 10.1145/2668904.2668940
Stuart James, J. Collomosse
We present a new algorithm for searching video repositories using free-hand sketches. Our queries express both appearance (color, shape) and motion attributes, as well as semantic properties (object labels) enabling hybrid queries to be specified. Unlike existing sketch based video retrieval (SBVR) systems that enable hybrid queries of this form, we do not adopt a model fitting/optimization approach to match at query-time. Rather, we create an efficiently searchable index via a novel space-time descriptor that encapsulates all these properties. The real-time performance yielded by our indexing approach enables interactive refinement of search results within a relevance feedback (RF) framework; a unique contribution to SBVR. We evaluate our system over 700 sports footage clips exhibiting a variety of clutter and motion conditions, demonstrating significant accuracy and speed gains over the state of the art.
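The paper's space-time descriptor is not reproduced here; as a rough sketch of the indexing idea, the toy descriptor below concatenates a global intensity histogram (appearance) with a histogram of optical-flow directions (motion), and ranks clips against a precomputed index by cosine similarity:

```python
import numpy as np

def spacetime_descriptor(frames, flows, color_bins=8, motion_bins=8):
    """Toy space-time descriptor: a global intensity histogram (appearance)
    concatenated with a histogram of optical-flow directions (motion).
    frames: (T,H,W) in [0,1]; flows: (T-1,H,W,2) flow vectors."""
    c_hist, _ = np.histogram(frames, bins=color_bins, range=(0.0, 1.0))
    angles = np.arctan2(flows[..., 1], flows[..., 0])
    m_hist, _ = np.histogram(angles, bins=motion_bins, range=(-np.pi, np.pi))
    d = np.concatenate([c_hist, m_hist]).astype(float)
    return d / (np.linalg.norm(d) + 1e-8)

def rank(query_desc, index):
    """Rank an (N,D) descriptor index by cosine similarity to the query."""
    return np.argsort(index @ query_desc)[::-1]

rng = np.random.default_rng(3)
clip = rng.random((10, 24, 24))
flow = rng.normal(size=(9, 24, 24, 2))
index = np.stack([spacetime_descriptor(np.clip(clip + 0.01 * i, 0, 1), flow)
                  for i in range(5)])
print(rank(spacetime_descriptor(clip, flow), index)[:3])
```

Precomputing such descriptors for every clip is what makes the query-time search fast enough for interactive relevance feedback.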
Citations: 10
Athlete pose estimation by non-sequential key-frame propagation
Pub Date: 2014-11-13 DOI: 10.1145/2668904.2668938
Mykyta Fastovets, Jean-Yves Guillemaut, A. Hilton
This paper considers the problem of estimating human pose in challenging monocular sports videos, where manual intervention is often required in order to obtain useful results. Fully automatic approaches focus on developing inference algorithms and probabilistic prior models based on learned measurements and often face challenges in generalisation beyond the learned dataset. This work expands on the idea of using an interactive model-based generative technique for accurately estimating the human pose from uncalibrated, unconstrained monocular TV sports footage. Keyframe propagation, together with optimal keyframe selection assistance for the operator, is introduced to obtain reliable tracking from limited operator input. Experimental results show that the approach produces results competitive with those produced from twice the number of manually annotated keyframes, halving the amount of interaction required.
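As an illustration of keyframe propagation with assisted keyframe selection (a simplified stand-in for the paper's model-based technique), the sketch below linearly interpolates joint positions between operator-annotated keyframes and suggests the next frame to annotate as the one with the largest proxy tracking error:

```python
import numpy as np

def propagate(keyframes, T):
    """Linearly interpolate (J,2) joint positions between annotated
    keyframes; `keyframes` maps frame index -> (J,2) array."""
    idx = sorted(keyframes)
    J = keyframes[idx[0]].shape[0]
    poses = np.empty((T, J, 2))
    ts = np.arange(T)
    for j in range(J):
        for d in range(2):
            poses[:, j, d] = np.interp(ts, idx,
                                       [keyframes[i][j, d] for i in idx])
    return poses

def next_keyframe(poses, error_fn):
    """Non-sequential selection: suggest annotating the frame where the
    proxy tracking error is largest."""
    return int(np.argmax(error_fn(poses)))

# toy example: a known ground-truth track stands in for the error proxy
rng = np.random.default_rng(4)
truth = np.cumsum(rng.normal(0, 1, (50, 14, 2)), axis=0)
keys = {0: truth[0], 49: truth[49]}
track = propagate(keys, 50)
print(next_keyframe(track, lambda p: np.linalg.norm(p - truth, axis=(1, 2))))
```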
Citations: 3
Device effect on panoramic video+context tasks
Pub Date: 2014-11-13 DOI: 10.1145/2668904.2668943
Fabrizio Pece, J. Tompkin, H. Pfister, J. Kautz, C. Theobalt
Panoramic imagery is viewed daily by thousands of people, and panoramic video imagery is becoming more common. This imagery is viewed on many different devices with different properties, and the effect of these differences on spatio-temporal task performance has not yet been tested for such imagery. We adapt a novel panoramic video interface and conduct a user study to discover whether display type affects spatio-temporal reasoning task performance across desktop monitor, tablet, and head-mounted displays. We discover that, in our complex reasoning task, HMDs are as effective as desktop displays even though participants felt less capable, whereas tablets were less effective than desktop displays even though participants felt just as capable. Our results impact virtual tourism, telepresence, and surveillance applications, and so we state the design implications of our results for panoramic imagery systems.
Citations: 7
Realistic retargeting of facial video
Pub Date: 2014-11-13 DOI: 10.1145/2668904.2668935
Wolfgang Paier, M. Kettern, P. Eisert
We propose a simple method for realistic retargeting of facial performance from one shot to another. Editors can combine different takes of a shot into a single, optimal take with minimal manual labour and highly realistic results. Using a static proxy mesh of the actor's head, we obtain approximate 3D information for recorded monocular facial video. This 3D information is used to create pose-invariant textures from the recorded facial action and to re-render them into a target shot. This can be done for the full face or for parts of it, allowing for flexible editing.
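Assuming per-frame correspondence maps between image pixels and the proxy mesh's UV atlas have already been computed from the estimated head pose (the hard part, which this sketch does not attempt), the re-texturing step can be illustrated with OpenCV's remap; the maps and function names here are hypothetical:

```python
import numpy as np
import cv2

def to_uv_texture(frame, map_x, map_y):
    """Resample a video frame into a pose-invariant UV atlas.  map_x/map_y
    give, for every UV texel, the source pixel coordinates obtained by
    projecting the proxy mesh with the frame's head pose."""
    return cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT)

def render_into_target(texture, inv_map_x, inv_map_y):
    """Re-render the UV texture under the target shot's head pose using
    the inverse correspondence maps."""
    return cv2.remap(texture, inv_map_x, inv_map_y,
                     interpolation=cv2.INTER_LINEAR)

# toy identity maps on a 64x64 image, just to show the call pattern
h, w = 64, 64
xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                     np.arange(h, dtype=np.float32))
frame = np.random.default_rng(5).random((h, w, 3)).astype(np.float32)
tex = to_uv_texture(frame, xs, ys)
print(render_into_target(tex, xs, ys).shape)  # (64, 64, 3)
```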
Citations: 4