
Latest publications from ACM SIGGRAPH 2015 Posters

A music video authoring system synchronizing climax of video clips and music via rearrangement of musical bars
Pub Date: 2015-07-31 DOI: 10.1145/2787626.2792608
Haruki Sato, T. Hirai, Tomoyasu Nakano, Masataka Goto, S. Morishima
This paper presents a system that can automatically add a soundtrack to a video clip by replacing and concatenating an existing song's musical bars according to a user's preference. Since a soundtrack makes a video clip attractive, adding one is among the most important steps in video editing. To make a clip more attractive, an editor tends to add a soundtrack with its timing and climax in mind; for example, editors often align chorus sections with the climax of the clip by replacing and concatenating musical bars in an existing song. In the process, however, editors must take the naturalness of the rearranged soundtrack into account. They therefore have to decide how to replace musical bars while simultaneously considering timing, climax, and the naturalness of the result. This requires optimizing the soundtrack by listening to the rearranged result and checking both its naturalness and its synchronization with the video clip, which is repetitious and time-consuming work. [Feng et al. 2010] proposed an automatic soundtrack-addition method, but because it adds a soundtrack with a purely data-driven approach, it cannot account for the timing and climax a user prefers.
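The trade-off the abstract describes — tracking the clip's climax while keeping bar transitions natural — can be posed as a path optimization over source bars. A minimal dynamic-programming sketch of that idea (an illustration only, not the authors' actual formulation; `bar_energy`, `target`, and `jump_penalty` are hypothetical inputs):

```python
def rearrange_bars(bar_energy, target, jump_penalty=0.5):
    """bar_energy[i]: 'excitement' of source bar i.
    target[t]: desired excitement at output slot t (peaks at the climax).
    Returns the index of the source bar chosen for each output slot."""
    n, m = len(bar_energy), len(target)
    INF = float("inf")
    cost = [[INF] * n for _ in range(m)]   # cost[t][i]: best cost using bar i at slot t
    back = [[0] * n for _ in range(m)]
    for i in range(n):
        cost[0][i] = abs(bar_energy[i] - target[0])
    for t in range(1, m):
        for i in range(n):
            fit = abs(bar_energy[i] - target[t])
            for j in range(n):
                # consecutive source bars (j -> j+1) transition naturally;
                # any other jump pays a naturalness penalty
                trans = 0.0 if i == j + 1 else jump_penalty
                c = cost[t - 1][j] + trans + fit
                if c < cost[t][i]:
                    cost[t][i] = c
                    back[t][i] = j
    # trace back the cheapest path of bars
    i = min(range(n), key=lambda k: cost[m - 1][k])
    path = [i]
    for t in range(m - 1, 0, -1):
        i = back[t][i]
        path.append(i)
    return path[::-1]
```

With a target curve that rises to a climax, the cheapest path keeps bars in their original order when that order already fits, and jumps only when the fit gain outweighs the penalty.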
Citations: 6
Mobile collaborative augmented reality with real-time AR/VR switching
Pub Date: 2015-07-31 DOI: 10.1145/2787626.2792662
Prashanth Bollam, Eesha Gothwal, G. B. C. S. T. Vinnakota, Shailesh Kumar, Soumyajit Deb
The recent boom in the computing capabilities of mobile devices has led to the introduction of Virtual Reality into the mobile ecosystem. We demonstrate a framework for the Samsung Gear VR headset that allows developers to create a fully immersive AR & VR experience with no need to interface with external devices or cables, making it a truly autonomous mobile VR experience. The significant benefits of this system over existing ones are a fully hands-free experience in which the hands can be used for gesture-based input, the ability to use the Head Mounted Display (HMD) sensor for improved head and positional tracking, and automatic peer-to-peer network creation for communication between phones. The most important goal of our system is to provide an intuitive way to interact with virtual objects in AR and VR, and users should be able to switch between the AR and VR worlds seamlessly.
Citations: 2
V3: an interactive real-time visualization of vocal vibrations
Pub Date: 2015-07-31 DOI: 10.1145/2787626.2792624
Rébecca Kleinberger
Our voice is an important part of our individuality, but the relationship we have with our own voice is not obvious. We don't hear it the way others do, and our brain treats it differently from any other sound we hear [Houde et al. 2002]. Yet its sonority is closely linked to our body and mind, and deeply connected with how we are perceived by society and how we see ourselves. The V3 system (Vocal Vibrations Visualization) offers an interactive visualization of vocal vibration patterns. We developed the hexauscultation mask, a headset sensor that measures bioacoustic signals from the voice at six points on the face and throat. These signals are transmitted and processed to provide a real-time visualization of the relative vibration intensities at the six measured points. The system can be used in a variety of situations, such as vocal training, tool design for the deaf community, and the design of HCI for speech-disorder treatment and prosody acquisition, as well as simply for personal vocal exploration.
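One plausible reading of "relative vibration intensities" is each sensor point's share of the total vibration energy per frame. A minimal sketch under that assumption (the function and its inputs are illustrative, not from the paper):

```python
import math

def relative_intensities(channels):
    """channels: one signal buffer per sensor point (six for the mask).
    Computes each channel's RMS energy and normalizes it by the total,
    giving the per-point fraction to drive a visualization."""
    rms = [math.sqrt(sum(x * x for x in ch) / len(ch)) for ch in channels]
    total = sum(rms) or 1.0  # avoid division by zero during silence
    return [r / total for r in rms]
```

The fractions always sum to 1, so the display stays comparable across loud and quiet phonation.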
Citations: 3
Inferring gaze shifts from captured body motion
Pub Date: 2015-07-31 DOI: 10.1145/2787626.2787663
D. Rakita, T. Pejsa, Bilge Mutlu, Michael Gleicher
Motion-captured performances seldom include eye gaze, because capturing this motion requires eye-tracking technology that is not typically part of a motion-capture setup. Yet eye-gaze information is important: it tells us what the actor was attending to during capture, and it adds to the expressivity of the performance.
Citations: 0
Increasing realism of animated grass in real-time game environments
Pub Date: 2015-07-31 DOI: 10.1145/2787626.2787660
Benjamin Knowles, O. Fryazinov
With the increasing quality of real-time graphics, it is vital to make sure assets move in a convincing manner; otherwise the player's immersion can be broken. Grass is an important case, as it can move substantially and often takes up a large portion of screen space in games. Grass animation is a subject of academic research [Fernando 2004; Perbet and Cani 2001] as well as a technology implemented in a number of video games, including Far Cry 4, Battlefield 4, Dear Esther, and Unigine Valley. Comparing video-game assets with reality shows that current methods have a number of problems that reduce the realism of the resulting grass animation: 1) the visibly planar nature of the grass geometry, and 2) problems with the grass movement, including over-connectivity of grass blades with respect to their neighbours, no obvious wind direction, and exaggerated swaying motions. In this paper we propose to increase the realism of the grass by focusing on its movement. The main contributions of this work are: 1) distinguishing ambient and directional components of the wind, and 2) a method for calculating directional wind using a grayscale map and a wind vector. The grass was implemented with vertex shaders, in line with the majority of methods described in the academic literature (e.g. [Fernando 2004]) and used in modern games.
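Combining an ambient sway with a map-driven directional gust per vertex can be sketched as follows — a toy Python transcription of what such vertex-shader logic might compute (the constants, the `wind_map` lookup, and the per-blade `phase` are illustrative assumptions, not taken from the paper):

```python
import math

def blade_offset(height_frac, t, pos, wind_dir, wind_map, phase):
    """Per-vertex horizontal sway for one grass-blade vertex.
    height_frac: 0 at the root, 1 at the tip, so roots stay planted.
    wind_map(x, z): grayscale lookup in [0, 1] giving local gust strength.
    wind_dir: normalized (x, z) wind vector fixing the wind direction.
    phase: per-blade offset so neighbours don't sway in lock-step
    (addresses the over-connectivity problem)."""
    ambient = 0.05 * math.sin(2.0 * t + phase)  # gentle idle sway
    gust = wind_map(pos[0], pos[1])             # directional component
    dx = (ambient + gust) * wind_dir[0] * height_frac
    dz = (ambient + gust) * wind_dir[1] * height_frac
    return dx, dz
```

Scrolling the grayscale map over the field in the wind direction would make gust fronts travel visibly through the grass.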
Citations: 3
Display of diamond dispersion using wavelength-division rendering and integral photography
Pub Date: 2015-07-31 DOI: 10.1145/2787626.2792642
Nahomi Maki, K. Yanaka
Because of dispersion, various colors, as in a prism, are observed in a properly cut diamond even under white light. A properly cut diamond scintillates when the viewing angle changes, because its large refractive index causes frequent total internal reflection inside the stone. Moreover, strong rainbow colors appear because of the high dispersion.
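Wavelength-division rendering traces each wavelength with its own refractive index, which is what separates the colors. A minimal sketch of that per-wavelength refraction step using Snell's law with a Cauchy-style index (the coefficients below are rough illustrative values for diamond, not measured data):

```python
import math

def diamond_index(wavelength_um, A=2.38, B=0.012):
    """Cauchy approximation n(lambda) = A + B / lambda^2.
    A and B here are rough illustrative values for diamond."""
    return A + B / wavelength_um**2

def refracted_angle(theta_in_deg, wavelength_um):
    """Snell's law from air (n=1) into diamond for one wavelength.
    Shorter wavelengths see a larger index and bend more."""
    n = diamond_index(wavelength_um)
    return math.degrees(math.asin(math.sin(math.radians(theta_in_deg)) / n))
```

Rendering one pass per sampled wavelength and summing the results (weighted by the eye's response) reproduces the rainbow fringes that a single-index renderer cannot.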
Citations: 2
Fully automatic ID mattes with support for motion blur and transparency
Pub Date: 2015-07-31 DOI: 10.1145/2787626.2787629
J. Friedman, Andrew C. Jones
In 3D production for commercials, television, and film, ID mattes are commonly used to modify rendered images without re-rendering. ID mattes are bitmap images used to isolate specific objects, or multiple objects, such as all of the buttons on a shirt. Many 3D pipelines are built to provide compositors with ID mattes in addition to beauty renders to allow flexibility.
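One way a matte can support motion blur and transparency is to store fractional per-object coverage at each pixel rather than a hard binary mask. A minimal sketch under that assumption (the data layout is illustrative, not the paper's implementation):

```python
def extract_matte(pixels, target_id):
    """pixels: 2D grid where each pixel holds a list of
    (object_id, coverage) samples, with coverages in [0, 1] summing
    to at most 1 per pixel. Fractional coverage is what lets one matte
    represent a motion-blurred or semi-transparent edge.
    Returns the per-pixel matte value for target_id."""
    return [[sum(c for oid, c in px if oid == target_id) for px in row]
            for row in pixels]
```

A pixel crossed by a blurred button edge might hold `[(button_id, 0.6), (shirt_id, 0.4)]`, so the button's matte is 0.6 there instead of a hard 0 or 1.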
Citations: 8
Jigsaw: multi-modal big data management in digital film production
Pub Date: 2015-07-31 DOI: 10.1145/2787626.2792617
S. Pabst, Hansung Kim, L. Polok, V. Ila, Ted Waine, A. Hilton, J. Clifford
Modern digital film production uses large quantities of data captured on-set, such as videos, digital photographs, LIDAR scans, spherical photography and many other sources to create the final film frames. The processing and management of this massive amount of heterogeneous data consumes enormous resources. We propose an integrated pipeline for 2D/3D data registration aimed at film production, based around the prototype application Jigsaw. It allows users to efficiently manage and process various data types from digital photographs to 3D point clouds. A key step in the use of multi-modal 2D/3D data for content production is the registration into a common coordinate frame (match moving). 3D geometric information is reconstructed from 2D data and registered to the reference 3D models using 3D feature matching [Kim and Hilton 2014]. We present several highly efficient and robust approaches to this problem. Additionally, we have developed and integrated a fast algorithm for incremental marginal covariance calculation [Ila et al. 2015]. This allows us to estimate and visualize the 3D reconstruction error directly on-set, where insufficient coverage or other problems can be addressed right away. We describe the fast hybrid multi-core and GPU accelerated techniques that let us run these algorithms on a laptop. Jigsaw has been used and evaluated in several major digital film productions and significantly reduced the time and work required to manage and process on-set data.
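Registering data into a common coordinate frame means estimating a rigid transform between corresponding features. As a toy stand-in for the paper's 3D feature-based registration, here is the closed-form 2D case with known correspondences (illustrative only; the real pipeline works in 3D with feature matching):

```python
import math

def register_2d(src, dst):
    """Closed-form 2D rigid registration with known correspondences:
    finds rotation theta and translation (tx, ty) mapping src onto dst,
    by centring both sets and solving for the best-fit angle."""
    n = len(src)
    cx_s = sum(p[0] for p in src) / n; cy_s = sum(p[1] for p in src) / n
    cx_d = sum(p[0] for p in dst) / n; cy_d = sum(p[1] for p in dst) / n
    # accumulate dot- and cross-products of centred point pairs
    sxx = sum((s[0] - cx_s) * (d[0] - cx_d) + (s[1] - cy_s) * (d[1] - cy_d)
              for s, d in zip(src, dst))
    sxy = sum((s[0] - cx_s) * (d[1] - cy_d) - (s[1] - cy_s) * (d[0] - cx_d)
              for s, d in zip(src, dst))
    theta = math.atan2(sxy, sxx)
    # translation carries the rotated src centroid onto the dst centroid
    tx = cx_d - (cx_s * math.cos(theta) - cy_s * math.sin(theta))
    ty = cy_d - (cx_s * math.sin(theta) + cy_s * math.cos(theta))
    return theta, tx, ty
```

The 3D analogue (Kabsch/Procrustes via SVD) follows the same centre-then-solve structure, just with a rotation matrix instead of a single angle.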
Citations: 2
Art directed rendering & shading using control images
Pub Date: 2015-07-31 DOI: 10.1145/2787626.2792612
E. Akleman, Siran Liu, D. House
In this work, we present a simple mathematical approach to art-directed shader development. We have tested this approach over two semesters in an introductory-level graduate rendering & shading class at Texas A&M University. The students each chose an artist's style to mimic and then easily created rendered images strongly resembling that style (see Figure 1). The method provides shader developers an intuitive process, giving them a high level of visual control in the creation of stylized depictions.
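A common way a control image steers a shader is as an artist-painted 1-D ramp indexed by the diffuse term, so the artist paints the entire dark-to-light transition. A minimal sketch of that reading (an illustrative interpretation of "control images", not necessarily the authors' exact construction):

```python
def shade(n_dot_l, ramp):
    """Look up a shading colour from an artist-painted 1-D control
    image. n_dot_l in [-1, 1] (surface normal dotted with the light
    direction) is remapped to [0, 1] and used as a ramp index, so the
    painted image fully determines the shading style."""
    t = max(0.0, min(1.0, 0.5 * (n_dot_l + 1.0)))  # remap to [0, 1]
    idx = min(int(t * len(ramp)), len(ramp) - 1)    # clamp to last texel
    return ramp[idx]
```

Swapping the ramp swaps the style: a hard two-tone ramp gives a toon look, a soft gradient a painterly one, with no shader-code changes.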
Citations: 0
Fractured 3D object restoration and completion
Pub Date: 2015-07-31 DOI: 10.1145/2787626.2792633
Anthousis Andreadis, Robert Gregor, I. Sipiran, P. Mavridis, Georgios Papaioannou, T. Schreck
The problem of restoring objects from eroded fragments, where large parts may be missing, is highly relevant in archaeology. Manual restoration is possible and common in practice, but it is a tedious and error-prone process that does not scale well. Solutions for specific parts of the problem have been proposed, but a complete reassembly-and-repair pipeline is absent from the literature. We propose a shape-restoration pipeline consisting of appropriate methods for automatic fragment reassembly and shape completion. We demonstrate the effectiveness of our approach on real-world fractured objects.
Citations: 6