
Latest publications: ACM SIGGRAPH 2016 Posters

GazeSim: simulating foveated rendering using depth in eye gaze for VR
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945153
Yun Suen Pai, Benjamin Tag, B. Outram, Noriyasu Vontin, Kazunori Sugiura, K. Kunze
We present a novel technique of implementing customized hardware that uses eye gaze focus depth as an input modality for virtual reality applications. By utilizing eye tracking technology, our system can detect the point in depth the viewer focuses on, and therefore promises more natural responses of the eye to stimuli, which will help overcome VR sickness and nausea. The obtained depth-focus information allows foveated rendering to keep the computing workload low and to create a more natural image that is sharp in the focused field but blurred outside it.
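The core mapping the abstract describes — sharp where the eyes converge, blurred elsewhere — can be sketched as a function from scene depth to blur radius. This is a minimal illustration of depth-based foveation, not the GazeSim hardware pipeline; the names and parameters (`sharp_range`, `gain`, `max_radius`) are assumptions.

```python
def foveation_blur_radius(pixel_depth, focus_depth,
                          sharp_range=0.1, gain=4.0, max_radius=8.0):
    """Map a pixel's scene depth to a blur radius.

    Pixels within `sharp_range` of the gaze-convergence depth stay sharp;
    beyond that, blur grows linearly with depth offset, capped at
    `max_radius`. All parameters are illustrative.
    """
    offset = abs(pixel_depth - focus_depth)
    if offset <= sharp_range:
        return 0.0
    return min(max_radius, gain * (offset - sharp_range))
```

A renderer would feed this radius into a depth-of-field blur pass, with `focus_depth` updated each frame from the eye tracker's vergence estimate.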
Citations: 27
Realistic 3D projection mapping using polynomial texture maps
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945142
Junho Choi, Jong Hun Lee, Yong Yi Lee, Yong Hwi Kim, Bilal Ahmed, M. Son, M. Joo, Kwan H. Lee
Projection mapping has been widely used to efficiently visualize real-world objects in areas such as exhibitions, advertisements, and theatrical performances. To represent the projected content realistically, the appearance of the object should be taken into consideration. Although there have been various attempts in computer graphics to represent appearance realistically through digital modeling of appearance materials, it is difficult to combine these with projection mapping because the measurement takes a huge amount of time and requires a large space. To counteract these challenges of time and space, [Malzbender et al. 2001] presented polynomial texture maps (PTM), which can represent surface reflectance properties such as diffuse and shadow effects by relighting the 3D object according to varying light directions around it. PTM has no such temporal or spatial constraints, requiring only several tens of images under different light directions, which makes it easy to produce an appealing appearance.
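The PTM model of Malzbender et al. stores six coefficients per pixel and reconstructs luminance from the projected light direction (lu, lv) as L = a0·lu² + a1·lv² + a2·lu·lv + a3·lu + a4·lv + a5. A NumPy sketch of fitting those coefficients from the "several tens of images" and relighting (function names are illustrative, not from the poster):

```python
import numpy as np

def fit_ptm(samples, light_dirs):
    """Least-squares fit of the six biquadratic PTM coefficients per pixel.

    samples: (n_lights, n_pixels) observed intensities;
    light_dirs: (n_lights, 2) projected light directions (lu, lv).
    Returns (n_pixels, 6) coefficients a0..a5.
    """
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    A = np.stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, samples, rcond=None)
    return coeffs.T

def relight(coeffs, lu, lv):
    """Evaluate L = a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5."""
    a0, a1, a2, a3, a4, a5 = np.moveaxis(coeffs, -1, 0)
    return a0 * lu**2 + a1 * lv**2 + a2 * lu * lv + a3 * lu + a4 * lv + a5
```

With at least six light directions the system is determined; in practice tens of images are used so the fit averages out noise.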
Citations: 2
Interactive multi-scale oil paint filtering on mobile devices
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945120
Amir Semmo, Matthias Trapp, Tobias Dürschmid, J. Döllner, S. Pasewaldt
This work presents an interactive mobile implementation of a filter that transforms images into an oil paint look. To this end, a multi-scale approach that processes image pyramids is introduced, which uses flow-based joint bilateral upsampling to achieve deliberate levels of abstraction at multiple scales and interactive frame rates. The approach facilitates interactive tools that adjust the appearance of filtering effects at run-time, demonstrated by an on-screen painting interface for per-pixel parameterization that fosters the casual creativity of non-artists.
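The idea behind joint bilateral upsampling is to filter at low resolution and upsample the result using the full-resolution image as an edge-preserving guide. The flow-based variant in the poster is not reproduced here; the following is a plain joint bilateral upsampling sketch (single-channel, unoptimized, all parameters illustrative):

```python
import numpy as np

def joint_bilateral_upsample(low, guide, sigma_s=1.0, sigma_r=0.1):
    """Upsample a low-res result `low` (h, w) to the resolution of the
    full-res `guide` (H, W). Each output pixel is a weighted average of
    nearby low-res samples, weighted spatially and by range similarity
    in the guide, so edges in the guide stay crisp."""
    H, W = guide.shape
    h, w = low.shape
    out = np.empty((H, W))
    for y in range(H):
        for x in range(W):
            # corresponding (fractional) position in the low-res grid
            ly = y * (h - 1) / (H - 1)
            lx = x * (w - 1) / (W - 1)
            y0, x0 = int(ly), int(lx)
            acc = norm = 0.0
            for yy in range(max(0, y0 - 1), min(h, y0 + 3)):
                for xx in range(max(0, x0 - 1), min(w, x0 + 3)):
                    # guide pixel nearest this low-res sample
                    gy = min(H - 1, round(yy * (H - 1) / (h - 1)))
                    gx = min(W - 1, round(xx * (W - 1) / (w - 1)))
                    ws = np.exp(-((yy - ly) ** 2 + (xx - lx) ** 2)
                                / (2 * sigma_s ** 2))
                    wr = np.exp(-(guide[y, x] - guide[gy, gx]) ** 2
                                / (2 * sigma_r ** 2))
                    acc += ws * wr * low[yy, xx]
                    norm += ws * wr
            out[y, x] = acc / norm
    return out
```

On a mobile GPU this per-pixel loop would be a fragment shader; the pyramid levels let the expensive oil-paint filter run on a small image while the upsampling restores full-resolution edges.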
Citations: 5
Error-bounded surface remeshing with minimal angle elimination
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945138
Kaimo Hu, Dong‐Ming Yan, Bedrich Benes
Surface remeshing is a key component in many geometry processing applications. However, existing high-quality remeshing methods usually introduce approximation errors that are difficult to control, while error-driven approaches pay little attention to mesh quality. Moreover, neither kind of approach can guarantee a minimal angle bound in the resulting meshes. We propose a novel error-bounded surface remeshing approach based on minimal angle elimination. Our method employs a dynamic priority queue that first parameterizes triangles containing angles smaller than a user-specified threshold. Those small angles are then eliminated by applying several local operators. To control geometric fidelity where local operators are applied, an efficient local error-measure scheme is proposed and integrated into our remeshing framework. Initial results show that the proposed approach bounds geometric fidelity strictly, while the minimal angle of the results can be raised to as much as 40 degrees.
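The driving data structure is a priority queue of threshold-violating triangles, processed smallest angle first. A minimal sketch of that queue construction (2D triangles for brevity; the local operators themselves are the paper's contribution and are not reproduced):

```python
import heapq
import math

def min_angle_deg(a, b, c):
    """Smallest interior angle of triangle (a, b, c), in degrees."""
    def angle_at(p, q, r):
        v1 = (q[0] - p[0], q[1] - p[1])
        v2 = (r[0] - p[0], r[1] - p[1])
        cos = ((v1[0] * v2[0] + v1[1] * v2[1])
               / (math.hypot(*v1) * math.hypot(*v2)))
        return math.degrees(math.acos(max(-1.0, min(1.0, cos))))
    return min(angle_at(a, b, c), angle_at(b, c, a), angle_at(c, a, b))

def build_queue(triangles, threshold_deg=40.0):
    """Queue only the triangles whose minimal angle violates the
    user-specified threshold; the worst (smallest angle) pops first."""
    heap = []
    for i, tri in enumerate(triangles):
        m = min_angle_deg(*tri)
        if m < threshold_deg:
            heap.append((m, i))
    heapq.heapify(heap)
    return heap
```

After each local operator, affected triangles would be re-measured and re-queued, which is what makes the queue "dynamic."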
Citations: 2
Computational swept volume light painting via robotic non-linear motion
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945105
Yaozhun Huang, Sze-Chun Tsang, Miu-Ling Lam
Light painting is a photography technique in which light sources are moved in specific patterns while being captured with a long exposure. The movements of the lights result in bright strokes or selectively illuminated and colored areas in the captured scene, decorating the real scene with special visual effects without the need for post-production. Light painting is not only a popular activity for hobbyists to express creativity, but also a practice for professional media artists and photographers producing aesthetic visual art and commercial photography. In conventional light paintings, the light sources are usually flashlights or other simple handheld lights made by attaching one or more LEDs to a stick or a ring. The patterns created are limited to abstract shapes or freehand strokes.
Citations: 3
Guessing objects in context
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945161
Karan Sharma, Arun C. S. Kumar, S. Bhandarkar
Large-scale object classification has seen commendable progress owing, in large part, to recent advances in deep learning. However, generating annotated training datasets is still a significant challenge, especially when training classifiers for a large number of object categories. In these situations, generating training datasets is expensive, and training data may not be available for all categories and situations. Such situations are generally resolved using zero-shot learning. However, training zero-shot classifiers entails serious programming effort and does not scale to a very large number of object categories. We propose a novel, simple framework that can guess objects in an image. The proposed framework has the advantages of scalability and ease of use with minimal loss in accuracy. It answers the following question: how does one guess objects in an image from very few object detections?
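The abstract does not spell out the guessing model, so the following is one plausible instantiation, not the authors' method: rank candidate labels by how often they co-occur with the few detected objects, using co-occurrence statistics mined from any external corpus. All data and names here are illustrative.

```python
from collections import Counter

def guess_objects(detected, cooccur, top_k=3):
    """Rank undetected labels by total co-occurrence with detected ones.

    detected: set of labels returned by the detector;
    cooccur: mapping label -> Counter of labels seen alongside it.
    """
    scores = Counter()
    for obj in detected:
        for other, count in cooccur.get(obj, {}).items():
            if other not in detected:
                scores[other] += count
    return [label for label, _ in scores.most_common(top_k)]
```

The appeal of such a scheme, in the spirit of the abstract, is that it needs no per-category training: adding a new category only means adding rows to the co-occurrence table.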
Citations: 0
Towards real-time insect motion capture
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945115
Deschanel Li
It is currently possible to reliably motion-track humans and some animals, but not possible to track insects using standard motion tracking techniques. By programming a virtual prototype rig/skeleton for the insects, small-scale creatures will be able to be tracked in real time. Possible applications include behavioural research on animals and the entertainment industry, e.g., when realistic insect motion simulation is needed and insects cannot be outfitted with sensors, as humans are, for animation in movies or games.
Citations: 2
Body-part motion synthesis system for contemporary dance creation
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945107
A. Soga, Yuho Yazaki, Bin Umino, M. Hirayama
We developed a body-part motion synthesis system (BMSS) that allows users to create short choreographies by synthesizing body-part motions and simulating them in 3D animation. The system automatically provides various short choreographies. First, users select a base motion and body-part categories. The system then automatically selects body-part motions and synthesizes them onto the base motion, determining the synthesis timings of the selected motions at random. Users can use the composed sequences as references for dance creation, learning, and training. We experimentally evaluated the system's effectiveness in supporting dance creation with four professional choreographers of contemporary dance. The results essentially verify the usability of BMSS for choreographic creation.
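The random synthesis timing the abstract mentions can be sketched as a scheduler that assigns each selected body-part motion a random start frame within the base motion. This is a minimal illustration under assumed data shapes, not the BMSS implementation.

```python
import random

def synthesize_schedule(base_len, part_motions, seed=None):
    """Assign each body-part motion a random start frame in the base motion.

    base_len: length of the base motion in frames;
    part_motions: list of (part_name, length_in_frames) pairs.
    Returns (part_name, start, end) tuples fully contained in the base motion.
    """
    rng = random.Random(seed)
    schedule = []
    for name, length in part_motions:
        start = rng.randrange(0, max(1, base_len - length + 1))
        schedule.append((name, start, start + length))
    return schedule
```

A playback layer would then blend each scheduled part motion over the base motion's corresponding joints during its [start, end) interval.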
Citations: 5
Video reshuffling: automatic video dubbing without prior knowledge
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945097
Shoichi Furukawa, Takuya Kato, Pavel A. Savkin, S. Morishima
Numerous videos have been translated using "dubbing," spurred by the recent growth of the video market. However, it is very difficult to achieve visual-audio synchronization: in general, the new audio does not synchronize with the actor's mouth motion. This discrepancy can disturb comprehension of the video content. Therefore, many methods have been researched to solve this problem.
Citations: 7
From drawing to animation-ready vector graphics
Pub Date : 2016-07-24 DOI: 10.1145/2945078.2945130
Even Entem, L. Barthe, Marie-Paule Cani, M. V. D. Panne
We present an automatic method to build a layered vector graphics structure, ready for animation, from a clean-line vector drawing of an organic, smooth shape. Inspired by 3D segmentation methods, we introduce a new metric computed on the medial axis of a region to identify and quantify the visual salience of a sub-region relative to the rest. This enables us to recursively separate each region into two closed sub-regions at the location of the most salient junction. The resulting structure, layered in depth, can be used to pose and animate the drawing with a regular 2D skeleton.
Citations: 1