ACM SIGGRAPH 2015 Posters — Latest Publications
A music video authoring system synchronizing climax of video clips and music via rearrangement of musical bars
Pub Date : 2015-07-31 DOI: 10.1145/2787626.2792608
Haruki Sato, T. Hirai, Tomoyasu Nakano, Masataka Goto, S. Morishima
This paper presents a system that can automatically add a soundtrack to a video clip by replacing and concatenating an existing song's musical bars according to a user's preference. Since a soundtrack makes a video clip attractive, adding one is among the most important processes in video editing. To make a clip more attractive, an editor typically chooses a soundtrack with the clip's timing and climax in mind. For example, editors often place chorus sections at the climax of the clip by replacing and concatenating musical bars in an existing song. In doing so, however, editors must take the naturalness of the rearranged soundtrack into account. They therefore have to decide how to replace musical bars while considering timing, climax, and naturalness simultaneously, optimizing the soundtrack by listening to the rearranged result and checking both its naturalness and its synchronization with the video clip. This repetitious work is time-consuming. [Feng et al. 2010] proposed an automatic soundtrack-addition method, but because it is purely data-driven it cannot account for the timing and climax a user prefers.
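As a toy illustration of the scheduling idea the abstract describes (not the authors' algorithm — the function name, bar data, and single-chorus setup are all assumptions), one can choose how many non-chorus bars to place before the chorus so that the chorus onset lands closest to the clip's climax time:

```python
def arrange_bars(verse_bars, chorus_bar, climax_time):
    """Place chorus_bar so its onset lands as close as possible to climax_time.

    verse_bars: list of (label, duration_in_seconds) tuples.
    chorus_bar: a single (label, duration) tuple.
    Returns the rearranged bar sequence.
    """
    best_k, best_err = 0, float("inf")
    for k in range(len(verse_bars) + 1):
        # Chorus onset time if k verse bars precede it.
        onset = sum(d for _, d in verse_bars[:k])
        err = abs(onset - climax_time)
        if err < best_err:
            best_k, best_err = k, err
    return verse_bars[:best_k] + [chorus_bar] + verse_bars[best_k:]
```

A real system would additionally penalize unnatural bar-to-bar transitions; here, with three 2-second verse bars and a climax at t = 3 s, the chorus is placed after the first bar (onset 2 s, the closest achievable).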
Citations: 6
Mobile collaborative augmented reality with real-time AR/VR switching
Pub Date : 2015-07-31 DOI: 10.1145/2787626.2792662
Prashanth Bollam, Eesha Gothwal, G. B. C. S. T. Vinnakota, Shailesh Kumar, Soumyajit Deb
The recent boom in the computing capabilities of mobile devices has brought Virtual Reality into the mobile ecosystem. We demonstrate a framework for the Samsung Gear VR headset that allows developers to create a fully immersive AR & VR experience with no need for external devices or cables, making it a truly autonomous mobile VR experience. Its significant benefits over existing systems are a fully hands-free experience in which the hands remain available for gesture-based input, the ability to use the head-mounted display (HMD) sensor for improved head and positional tracking, and automatic peer-to-peer network creation for communication between phones. The most important goal of our system is to provide an intuitive way to interact with virtual objects in AR and VR, and users should be able to switch between the AR and VR worlds seamlessly.
Citations: 2
V3: an interactive real-time visualization of vocal vibrations
Pub Date : 2015-07-31 DOI: 10.1145/2787626.2792624
Rébecca Kleinberger
Our voice is an important part of our individuality, yet the relationship we have with our own voice is not obvious. We do not hear it the way others do, and our brain treats it differently from any other sound we hear [Houde et al. 2002]. Its sonority is nonetheless highly linked to our body and mind, and deeply connected with how society perceives us and how we see ourselves. The V3 system (Vocal Vibrations Visualization) offers an interactive visualization of vocal vibration patterns. We developed the hexauscultation mask, a headset sensor that measures bioacoustic signals from the voice at six points on the face and throat. These signals are transmitted and processed to provide a real-time visualization of the relative vibration intensities at the six measured points. The system can be used in many situations, such as vocal training, tool design for the deaf community, HCI design for speech-disorder treatment and prosody acquisition, or simply personal vocal exploration.
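The "relative vibration intensities" step could be sketched as follows — a minimal illustration assuming RMS energy per channel over a short window (the function name, windowing, and normalization choice are assumptions, not the authors' implementation):

```python
import math

def relative_intensities(frames, n_channels=6):
    """Reduce a window of 6-channel sensor samples to per-channel fractions.

    frames: list of samples, each a sequence of n_channels readings.
    Returns n_channels non-negative fractions summing to 1.
    """
    rms = []
    for ch in range(n_channels):
        energy = sum(sample[ch] ** 2 for sample in frames) / len(frames)
        rms.append(math.sqrt(energy))
    total = sum(rms) or 1.0  # guard against an all-silent window
    return [r / total for r in rms]
```

Feeding successive windows of the six bioacoustic signals through a function like this yields the per-point fractions that drive the real-time display.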
Citations: 3
Jigsaw: multi-modal big data management in digital film production
Pub Date : 2015-07-31 DOI: 10.1145/2787626.2792617
S. Pabst, Hansung Kim, L. Polok, V. Ila, Ted Waine, A. Hilton, J. Clifford
Modern digital film production uses large quantities of data captured on set, such as videos, digital photographs, LIDAR scans, and spherical photography, to create the final film frames. Processing and managing this massive amount of heterogeneous data consumes enormous resources. We propose an integrated pipeline for 2D/3D data registration aimed at film production, built around the prototype application Jigsaw, which lets users efficiently manage and process data types ranging from digital photographs to 3D point clouds. A key step in using multi-modal 2D/3D data for content production is registration into a common coordinate frame (match moving): 3D geometric information is reconstructed from 2D data and registered to the reference 3D models using 3D feature matching [Kim and Hilton 2014]. We present several highly efficient and robust approaches to this problem. Additionally, we have developed and integrated a fast algorithm for incremental marginal covariance calculation [Ila et al. 2015], which allows us to estimate and visualize the 3D reconstruction error directly on set, where insufficient coverage or other problems can be addressed right away. We describe the fast hybrid multi-core and GPU-accelerated techniques that let these algorithms run on a laptop. Jigsaw has been used and evaluated in several major digital film productions and has significantly reduced the time and work required to manage and process on-set data.
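Registration into a common coordinate frame can be illustrated with the standard Kabsch/Procrustes least-squares rigid alignment of matched 3D point pairs — a generic textbook method, shown here only to make the step concrete, not necessarily what Jigsaw uses internally:

```python
import numpy as np

def rigid_align(P, Q):
    """Return rotation R and translation t minimizing ||(P @ R.T + t) - Q||.

    P, Q: (n, 3) arrays of matched 3D points
    (e.g. reconstructed geometry vs. the reference model).
    """
    p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_bar).T @ (Q - q_bar)           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_bar - R @ p_bar
    return R, t
```

Given correspondences from 3D feature matching, the recovered (R, t) places the reconstruction in the reference model's coordinate frame.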
Citations: 2
Automatic synthesis of eye and head animation according to duration and point of gaze
Pub Date : 2015-07-31 DOI: 10.1145/2787626.2792607
Hiroki Kagiyama, Masahide Kawai, Daiki Kuwahara, Takuya Kato, S. Morishima
In movie and video game productions, synthesizing the subtle eye and corresponding head movements of a CG character is essential to making content dramatic and impressive. However, completing them costs a great deal of time and labor, because they often must be crafted manually by skilled artists.
Citations: 0
Fractured 3D object restoration and completion
Pub Date : 2015-07-31 DOI: 10.1145/2787626.2792633
Anthousis Andreadis, Robert Gregor, I. Sipiran, P. Mavridis, Georgios Papaioannou, T. Schreck
The problem of restoring objects from eroded fragments, where large parts may be missing, is highly relevant in archaeology. Manual restoration is possible and common in practice, but it is a tedious and error-prone process that does not scale well. Solutions to specific parts of the problem have been proposed, but a complete reassembly and repair pipeline is absent from the literature. We propose a shape restoration pipeline consisting of appropriate methods for automatic fragment reassembly and shape completion, and we demonstrate its effectiveness on real-world fractured objects.
Citations: 6
Display of diamond dispersion using wavelength-division rendering and integral photography
Pub Date : 2015-07-31 DOI: 10.1145/2787626.2792642
Nahomi Maki, K. Yanaka
Various colors, as in a prism, are observed in a properly cut diamond even under white light because of dispersion. A properly cut diamond scintillates when the viewing angle changes, because its large refractive index makes total internal reflection occur frequently. Moreover, its high dispersion produces strong rainbow colors.
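The core of per-wavelength ("wavelength-division") rendering can be sketched as tracing each wavelength with its own refractive index. Below is a hedged illustration using a Cauchy-style approximation; the coefficients are rough illustrative values for diamond, not taken from the paper:

```python
import math

A, B = 2.38, 1.2e4  # Cauchy coefficients (B in nm^2); rough values for diamond

def refractive_index(wavelength_nm):
    """Cauchy approximation: shorter wavelengths see a higher index."""
    return A + B / wavelength_nm ** 2

def refraction_angle_deg(incidence_deg, wavelength_nm):
    """Snell's law for a ray entering the diamond from air (n_air ~ 1)."""
    n = refractive_index(wavelength_nm)
    return math.degrees(math.asin(math.sin(math.radians(incidence_deg)) / n))

def critical_angle_deg(wavelength_nm):
    """Internal rays steeper than this are totally internally reflected."""
    return math.degrees(math.asin(1.0 / refractive_index(wavelength_nm)))
```

With n around 2.4 the critical angle is only about 24 degrees, which is why internal rays reflect so often; and since blue light (around 400 nm) sees a higher index than red (around 700 nm), the per-wavelength rays separate into the rainbow colors described above.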
Citations: 2
Shadow shooter: 360-degree all-around virtual 3d interactive content
Pub Date : 2015-07-31 DOI: 10.1145/2787626.2787637
Masasuke Yasumoto, Takehiro Teraoka
"Shadow Shooter" is a VR shooting game that uses the "e-Yumi 3D" bow interface and real physical interactive content to turn a 360-degree all-around view of a room into virtual game space (Figure 1). The system was built by extending our previous interactive "Light Shooter" content, which was based on "The Electric Bow Interface" [Yasumoto and Ohta 2013]. Shadow Shooter expands the virtual game space to all the walls of a room, as in "RoomAlive" [Jones et al. 2014]; however, it does not require large-scale equipment such as multiple projectors. It requires only the e-Yumi 3D device, which combines a real bow's components with Willis's mobile-projector interface [Willis et al. 2013]. We thus constructed a unique device for Shadow Shooter that easily turns a 360-degree all-around view into a virtual game space.
Citations: 4
AffectiveWear: toward recognizing facial expression
Pub Date : 2015-07-31 DOI: 10.1145/2787626.2792632
Katsutoshi Masai, Yuta Sugiura, Masa Ogata, Katsuhiro Suzuki, Fumihiko Nakamura, S. Shimamura, K. Kunze, M. Inami, M. Sugimoto
Facial expressions are a powerful way to exchange information nonverbally; they give us insight into how people feel and think. A number of works in computer vision address facial expression detection, but most focus on camera-based systems installed in the environment. With that approach it is difficult to track the user's face when the user moves constantly, and facial expressions can be recognized only in a limited area.
Citations: 1
Fully automatic ID mattes with support for motion blur and transparency
Pub Date : 2015-07-31 DOI: 10.1145/2787626.2787629
J. Friedman, Andrew C. Jones
In 3D production for commercials, television, and film, ID mattes are commonly used to modify rendered images without re-rendering. ID mattes are bitmap images used to isolate a specific object or a group of objects, such as all of the buttons on a shirt. Many 3D pipelines are built to provide compositors with ID mattes in addition to beauty renders, allowing flexibility.
Citations: 8