Beyond foveal rendering: smart eye-tracking enabled networking (SEEN)
Konrad Tollmar, P. Lungaro, A. Valero, Ashutosh Mittal
DOI: 10.1145/3084363.3085163
Smart Eye-tracking Enabled Networking (SEEN) is a novel end-to-end framework that uses real-time eye-gaze information to go beyond state-of-the-art solutions. Our approach effectively combines the computational savings of foveal rendering with the bandwidth savings required to enable future mobile VR content provision.
{"title":"Beyond foveal rendering: smart eye-tracking enabled networking (SEEN)","authors":"Konrad Tollmar, P. Lungaro, A. Valero, Ashutosh Mittal","doi":"10.1145/3084363.3085163","DOIUrl":"https://doi.org/10.1145/3084363.3085163","url":null,"abstract":"Smart Eye-tracking Enabled Networking (SEEN) is a novel end-to-end framework using real-time eye-gaze information beyond state-of-the-art solutions. Our approach can effectively combine the computational savings of foveal rendering with the bandwidth savings required to enable future mobile VR content provision.","PeriodicalId":163368,"journal":{"name":"ACM SIGGRAPH 2017 Talks","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117271205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Moana: performing water
Ben Frost, A. Stomakhin, Hiroaki Narita
DOI: 10.1145/3084363.3085091
For Disney's Moana, water was a dominant part of island life; in fact, it had a life of its own. Presenting itself as a character, water was ever present in a multitude of shapes and scales. An end-to-end water pipeline was developed for this film [Garcia et al. 2016], including a proprietary APIC fluid solver [Jiang et al. 2015] named Splash, which gave us physically accurate simulations. The challenge in performing water was to provide art-directed simulations that defy physics yet remain grounded in a sense of possibility. Incorporating natural swells and flows to support the building of designed shapes limited anthropomorphic features and played to our goal of communicating that this character is the ocean as a whole.
{"title":"Moana: performing water","authors":"Ben Frost, A. Stomakhin, Hiroaki Narita","doi":"10.1145/3084363.3085091","DOIUrl":"https://doi.org/10.1145/3084363.3085091","url":null,"abstract":"For Disney's Moana, water was a dominant part of island life, in fact it had a life of itfis own. Presenting itself as a character, water was ever present, in a multitude of shapes and scales. An end-to-end water pipeline was developed for this film [Garcia et al. 2016], including the creation of proprietary fluid APIC solver [Jiang et al. 2015] named Splash. This gave us physically accurate simulations. The challenge with performing water was to provide art-directed simulations, defying physics, yet remaining in a grounded sense of possibility. Incorporating natural swells and flows to support the building of designed shapes limited anthropomorphic features, and played to our goal of communicating that this character is the ocean as a whole.","PeriodicalId":163368,"journal":{"name":"ACM SIGGRAPH 2017 Talks","volume":"65 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129313038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
VarCity - the video: the struggles and triumphs of leveraging fundamental research results in a graphics video production
K. Vanhoey, C. Oliveira, Hayko Riemenschneider, A. Bódis-Szomorú, Santiago Manén, D. Paudel, Michael Gygli, Nikolay Kobyshev, Till Kroeger, Dengxin Dai, L. Gool
DOI: 10.1145/3084363.3085085
VarCity - the Video is a short documentary-style CGI movie explaining the main outcomes of VarCity, a five-year computer vision research project. Besides a coarse overview of the research, we present the challenges faced in its production, which arose from two factors: i) the use of imperfect research data produced by automatic algorithms, and ii) human factors, such as federating researchers and a CG artist around a common goal that many conceived differently, while no one had a detailed overview of all the content. Success was driven in part by ad-hoc technical developments, but more importantly by detailed and abundant communication and by agreement on common best practices.
{"title":"VarCity - the video: the struggles and triumphs of leveraging fundamental research results in a graphics video production","authors":"K. Vanhoey, C. Oliveira, Hayko Riemenschneider, A. Bódis-Szomorú, Santiago Manén, D. Paudel, Michael Gygli, Nikolay Kobyshev, Till Kroeger, Dengxin Dai, L. Gool","doi":"10.1145/3084363.3085085","DOIUrl":"https://doi.org/10.1145/3084363.3085085","url":null,"abstract":"VarCity - the Video is a short documentary-style CGI movie explaining the main outcomes of the 5-year Computer Vision research project VarCity. Besides a coarse overview of the research, we present the challenges that were faced in its production, induced by two factors: i) usage of imperfect research data produced by automatic algorithms, and ii) human factors, like federating researchers and a CG artist around a similar goal many had a different conception of, while no one had a detailed overview of all the content. Successive achievement was driven by some ad-hoc technical developments but more importantly of detailed and abundant communication and agreement on common best practices.","PeriodicalId":163368,"journal":{"name":"ACM SIGGRAPH 2017 Talks","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132016461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Build your own procedural grooming pipeline
W. Choi, Nayoung Kim, Julie Jang, Sang-Hun Kim, Dohyun Yang
DOI: 10.1145/3084363.3085025
Although commercial software is available for producing digital fur and feathers, creating photorealistic digital creatures on a low budget is still no trivial matter. Because no software could fulfill our purposes at the time of making our first movie, Mr. Go, we decided to develop our own custom solution, ZelosFur [Choi et al. 2013]. While it made furry digital creature creation possible for subsequent projects, this prototype system lacked the flexibility to easily add new features with backward compatibility, and it did not give artists enough freedom or control over the grooming process. Zelos Node Network (ZENN) is a new procedural solution that allows quick, easy, and art-directable creation of all kinds of body coverings for digital creatures (e.g. fur, feathers, and scales). By extension, it can also be used to create forests, rocks, and verdant landscapes for digital environments. In this talk, we discuss how to design and implement a procedural grooming workflow within ZENN and briefly address our caching and rendering process.
{"title":"Build your own procedural grooming pipeline","authors":"W. Choi, Nayoung Kim, Julie Jang, Sang-Hun Kim, Dohyun Yang","doi":"10.1145/3084363.3085025","DOIUrl":"https://doi.org/10.1145/3084363.3085025","url":null,"abstract":"Although there is commercially available software for producing digital fur and feathers, creating photorealistic digital creatures under a low budget is still no trivial matter. Because no software could fulfill our purposes at the time of the making of our first movie Mr. Go, we decided to develop our own custom solution, ZelosFur [Choi et al. 2013]. While it made possible furry digital creature creation for subsequent projects, this prototypical system lacked the flexibility to easily add new features with backward compatibility and did not provide artists with enough freedom or control over the grooming process. Zelos Node Network (ZENN) is a new procedural solution that allows for quick, easy, and art-directable creation of all kinds of body coverings for digital creatures (e.g. fur, feathers, scales, etc.) By extension, it can also be used to create forests, rocks, and verdant landscapes for digital environments. In this talk, we discuss how to design and implement a procedural grooming workflow within ZENN and briefly address our caching and rendering process.","PeriodicalId":163368,"journal":{"name":"ACM SIGGRAPH 2017 Talks","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134313752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evolving complexity management on "the LEGO Batman movie"
Jens Jebens, D. Gray, Simon H. Bull, Aidan Sarsfield
DOI: 10.1145/3084363.3085059
The demand for asset complexity has increased by several orders of magnitude since The LEGO Movie. This required the team at Animal Logic to further develop their proprietary rendering and shading pipeline while significantly optimising nearly all aspects of asset creation. Animal Logic's already extensive library of LEGO bricks was expanded considerably and centralised for use across multiple shows and multiple locations. Continued development of asset-creation tools and significant increases in pipeline automation enabled more review cycles, greater consistency, and minimal duplication of effort.
{"title":"Evolving complexity management on \"the LEGO Batman movie\"","authors":"Jens Jebens, D. Gray, Simon H. Bull, Aidan Sarsfield","doi":"10.1145/3084363.3085059","DOIUrl":"https://doi.org/10.1145/3084363.3085059","url":null,"abstract":"The demand for asset complexity has increased by several orders of magnitude since The LEGO Movie. This has resulted in the need for the team at Animal Logic to further develop their proprietary render and shading pipeline, while significantly optimising nearly all aspects of asset creation. Animal Logic's already extensive library of LEGO bricks was expanded considerably, and centralised for use across multiple shows and multiple locations. Continued development of asset creation tools, and significant increases in pipeline automation ensured increased review cycles, greater consistency and minimal duplication of effort.","PeriodicalId":163368,"journal":{"name":"ACM SIGGRAPH 2017 Talks","volume":"273 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122114528","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A new (virtual) reality at the New York Times","authors":"G. Roberts","doi":"10.1145/3084363.3105999","DOIUrl":"https://doi.org/10.1145/3084363.3105999","url":null,"abstract":"","PeriodicalId":163368,"journal":{"name":"ACM SIGGRAPH 2017 Talks","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125397382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Compact iso-surface representation and compression for fluid phenomena
T. Keeler, R. Bridson
DOI: 10.1145/3084363.3085080
We propose a novel method of compressing a fluid effect for real-time playback by using a compact mathematical representation of the spatio-temporal fluid surface. To create the surface representation, we take as input a set of fluid meshes from standard techniques, along with the simulation's surface velocity, and construct a spatially adaptive and temporally coherent Lagrangian least-squares representation of the surface. We then compress the Lagrangian point data using a technique called Fourier extensions for further compression gains. The resulting surface is easily decompressed and amenable to parallel evaluation. We demonstrate real-time and interactive decompression and meshing of surfaces using a dual-contouring method that efficiently uses the decompressed particle data and least-squares representation to create a view-dependent triangulation.
{"title":"Compact iso-surface representation and compression for fluid phenomena","authors":"T. Keeler, R. Bridson","doi":"10.1145/3084363.3085080","DOIUrl":"https://doi.org/10.1145/3084363.3085080","url":null,"abstract":"We propose a novel method of compressing a fluid effect for realtime playback by using a compact mathematical representation of the spatio-temporal fluid surface. To create the surface representation we use as input a set of fluid meshes from standard techniques along with the simulation's surface velocity to construct a spatially adaptive and temporally coherent Lagrangian least-squares representation of the surface. We then compress the Lagrangian point data using a technique called Fourier extensions for further compression gains. The resulting surface is easily decompressed and amenable to being evaluated in parallel. We demonstrate real-time and interactive decompression and meshing of surfaces using a dual-contouring method that efficiently uses the decompressed particle data and least-squares representation to create a view dependent triangulation.","PeriodicalId":163368,"journal":{"name":"ACM SIGGRAPH 2017 Talks","volume":"127 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127079161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Artist-driven crowd authoring tools
D. Maupu, Emanuele Goffredo, N. Hylton, Mungo Pay, M. Prazák
DOI: 10.1145/3084363.3085035
While crowd simulation frameworks can be very powerful for virtual crowd generation, in a VFX context they can also be unwieldy due to their chaotic nature: small changes to the inputs can produce markedly different results, which is problematic when attempting to adhere to a director's vision. Artist-driven tools allow much more flexibility when constructing scenes, speed up turnaround time, and can produce extremely dynamic crowd shots. To generate virtual crowds, Double Negative VFX (Dneg) has recently transitioned from an in-house, standalone, simulation-based solution to an artist-driven framework integrated into SideFX's Houdini.
{"title":"Artist-driven crowd authoring tools","authors":"D. Maupu, Emanuele Goffredo, N. Hylton, Mungo Pay, M. Prazák","doi":"10.1145/3084363.3085035","DOIUrl":"https://doi.org/10.1145/3084363.3085035","url":null,"abstract":"While crowd simulation frameworks can be very powerful for virtual crowd generation, in a VFX context they can also be unwieldy due to their chaotic nature. Small changes on the inputs can produce markedly different results, which can be problematic when attempting to adhere to a director's vision. Artist driven tools allow much more flexibility when constructing scenes, speed up turn-around time and can produce extremely dynamic crowd shots. To generate virtual crowds, Double Negative VFX (Dneg) has recently transitioned from an in-house standalone simulation-based solution to an artist-driven framework integrated into SideFX's Houdini.","PeriodicalId":163368,"journal":{"name":"ACM SIGGRAPH 2017 Talks","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128092006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Procedural photograph generation from actual gameplay: snapshot AI in FINAL FANTASY XV
Prasert Prasertvithyakarn, Tatsuhiro Joudan, Hidekazu Kato, Seiji Nanase, Masayoshi Miyamoto, I. Hasegawa
DOI: 10.1145/3084363.3085078
In FINAL FANTASY XV, a triple-A open-world RPG, we propose a new approach to smart gameplay sharing by introducing a novel mechanism for automatic gameplay photograph generation. Unlike the classic screenshots most players are familiar with, the generated photographs are depicted as though seen from the perspective of the in-game AI companion "Prompto". The system enhances the photos with several features, such as shot facing, facial and body motion exaggeration, auto triggering, auto framing, auto focusing, auto post-filtering, and auto album management. It is capable of generating photographs that are stylish and unique, yet represent the player's gameplay in a way no other game has accomplished before. With an in-game social-network posting interface, generated photos can be easily shared. As a result, since the game's release, these photos have flooded Facebook and Twitter, setting a new benchmark in smart gameplay sharing.
{"title":"Procedural photograph generation from actual gameplay: snapshot AI in FINAL FANTASY XV","authors":"Prasert Prasertvithyakarn, Tatsuhiro Joudan, Hidekazu Kato, Seiji Nanase, Masayoshi Miyamoto, I. Hasegawa","doi":"10.1145/3084363.3085078","DOIUrl":"https://doi.org/10.1145/3084363.3085078","url":null,"abstract":"In FINAL FANTASY XV, a triple-A open world RPG, we have proposed a new method of smart gameplay sharing by introducing a novel mechanism of automatic gameplay photograph generation. Unlike the classic screenshots that most players are familiar with, the photographs generated are depicted as though they were seen from the perspective of the in-game AI companion \"Prompto\". This system enhances the photos with several features such as shot facing, facial-body motion exaggeration, auto triggering, auto framing, auto focusing, auto post-filtering and auto album management. The system is capable of generating photographs that are stylish and unique, yet represent your gameplay in a new way no other games have accomplished before. With an in-game social network posting interface, generated photos can be easily shared. As a result, since the release of the game, our photos are flooding Facebook and Twitter, while creating a new benchmark to the world in the field of smart gameplay sharing.","PeriodicalId":163368,"journal":{"name":"ACM SIGGRAPH 2017 Talks","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124436887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Populating the crowds in Ferdinand
G. Mourino, Mason Evans, Kevin Edzenga, Svetla Cavaleri, Mark Adams, J. Bisceglio
DOI: 10.1145/3084363.3085055
With the help of new tools, we streamlined our review and render processes for crowds to triple our shot count on our latest show, Ferdinand. At the same time, we integrated some novel approaches to complex deformation features for cloth and facial animation, which elevated the quality of our crowd animations.
{"title":"Populating the crowds in Ferdinand","authors":"G. Mourino, Mason Evans, Kevin Edzenga, Svetla Cavaleri, Mark Adams, J. Bisceglio","doi":"10.1145/3084363.3085055","DOIUrl":"https://doi.org/10.1145/3084363.3085055","url":null,"abstract":"With the help of new tools, we streamlined our review and render processes for crowds to triple our shot count on our latest show, Ferdinand. At the same time, we integrated some novel approaches to complex deformation features for cloth and facial animation, which elevated the quality of our crowd animations.","PeriodicalId":163368,"journal":{"name":"ACM SIGGRAPH 2017 Talks","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131525032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}