{"title":"NeRFahedron: A Primitive for Animatable Neural Rendering with Interactive Speed","authors":"Zackary P. T. Sin, P. H. F. Ng, H. Leong","doi":"10.1145/3585512","DOIUrl":"https://doi.org/10.1145/3585512","url":null,"abstract":"pipeline","PeriodicalId":74536,"journal":{"name":"Proceedings of the ACM on computer graphics and interactive techniques","volume":"6 1","pages":"2:1-2:20"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"64067992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
HDHumans
Marc Habermann, Lingjie Liu, Weipeng Xu, Gerard Pons-Moll, Michael Zollhoefer, C. Theobalt
Proceedings of the ACM on Computer Graphics and Interactive Techniques 6(1), 1-23, 2022-10-21. DOI: https://doi.org/10.1145/3606927

Photo-real digital human avatars are of enormous importance in graphics, as they enable immersive communication across the globe, improve gaming and entertainment experiences, and can be particularly beneficial for AR and VR settings. However, current avatar generation approaches either fall short in high-fidelity novel-view synthesis, generalization to novel motions, or reproduction of loose clothing, or they cannot render characters at the high resolutions offered by modern displays. To this end, we propose HDHumans, the first method for HD human character synthesis that jointly produces an accurate and temporally coherent 3D deforming surface and highly photo-realistic images of arbitrary novel views and of motions not seen at training time. At its technical core, our method tightly integrates a classical deforming character template with neural radiance fields (NeRF), and it is carefully designed to achieve a synergy between classical surface deformation and a NeRF. First, the template guides the NeRF, which allows synthesizing novel views of a highly dynamic, articulated character and even enables the synthesis of novel motions. Second, we leverage the dense point clouds produced by the NeRF to further improve the deforming surface via 3D-to-3D supervision. We outperform the state of the art quantitatively and qualitatively in synthesis quality and resolution, as well as in the quality of 3D surface reconstruction.
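The template-guided conditioning the abstract describes can be sketched in miniature: before a world-space sample is fed to the radiance network, it is re-expressed relative to the deforming template surface, so the network sees template-local rather than world coordinates. The toy below is an illustrative sketch only, not the authors' implementation; the nearest-vertex projection and the helper name `template_local_coords` are assumptions for illustration.

```python
import numpy as np

def template_local_coords(x, template_vertices):
    """Express a world-space sample x relative to a deforming template:
    the index of the nearest template vertex plus the residual offset.
    (Toy stand-in for a full surface parameterization.)"""
    dists = np.linalg.norm(template_vertices - x, axis=1)
    idx = int(np.argmin(dists))
    return idx, x - template_vertices[idx]

# Tiny triangle "template" and a query sample near vertex 2.
verts = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
idx, offset = template_local_coords(np.array([0.1, 0.9, 0.05]), verts)
# idx identifies the nearest template vertex; offset is the local residual
# that would be handed to the radiance network instead of raw world coordinates.
```

Because the local coordinates follow the template as it deforms, the same network input describes the same body region across poses, which is what makes novel-motion synthesis plausible.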
Below Victory
S. Hessels
Proceedings of the ACM on Computer Graphics and Interactive Techniques, 1-10, 2022-09-06. DOI: https://doi.org/10.1145/3533389

Recent advances in Ground Penetrating Radar (GPR) have caused the imaging technology to pivot from a simple construction-engineering tool to a valuable new option for archaeology. Newfound abilities to model radar echoes resonating through stone have revealed archaeological discoveries where excavation is not possible. Working with a transdisciplinary team, the artist secured a GPR scan of 2,000-year-old Gallo-Roman temple ruins below the plaza of a Gothic cathedral in France. The technology's sounding image of the hidden site became a visual language explored in a two-year series of artworks based on the discovery. The art + science research project resulted in data visualizations across many creative media, including site-specific public trompe l'oeil, augmented reality, and hundreds of design experiments. Using the GPR dataset as a foundational resource in art-making, the project expanded the interpretation of Digital Heritage. Collectively, the works reinforced the understanding of a site hidden since Antiquity but also considered public non-sites in pandemic times. The advances in this scanning technology proved to be a powerful creative tool for highlighting how we protect and understand heritage, how we create public experiences in socially distanced times, and our responsibility to continually reconsider complex history.
Three Stage Drawing Transfer
R. Twomey
Proceedings of the ACM on Computer Graphics and Interactive Techniques 5(1), 1-7, 2022-09-06. DOI: https://doi.org/10.1145/3533614

This project creates a visual-mental-physical circuit between a Generative Adversarial Network (GAN), a co-robotic arm, and a five-year-old child. From training images to the latent space of a GAN, through pen on paper to a live human collaborator, it establishes a series of translational stages between humans and non-humans played out through the medium of drawing. Trained on a subset of the Rhoda Kellogg Child Art Collection, the neural network at the center of this piece learns its own representations of these images. The generated results, synthetic children's drawings, are of interest both for being outside of adult conventions and learned expression (like Dubuffet's art brut or Surrealist automatism) and for how they align machine learning with the human act of learning to draw. The project layers many kinds of agency and embodiment: from the thousands of anonymous children who produced the original artwork used as training data, through the co-robot drawing from GAN-generated imagery, to the human child's active perception and graphic response to the robot. These questions of where we search for the other, when we attribute autonomy and intelligence, and why we might wish to escape our human subjectivities speak to core issues in the design and use of AI systems. This project is one attempt to think through those questions in an embodied way.
Field of Leaves
Pedro Silva, Daniel Lopes, Pedro Martins, Penousal Machado
Proceedings of the ACM on Computer Graphics and Interactive Techniques, 1-11, 2022-09-06. DOI: https://doi.org/10.1145/3533683

Field of Leaves is an interactive installation that depicts public contracts held in Portugal and their distribution across its five mainland regions and two autonomous regions. The installation presents information about the public contracts available at Portal Basegov, the online portal of public procurements. The installation advocates for the importance and advantages of aesthetics in first-time audience engagement, and for how user interaction and hedonic qualities can heighten the user's curiosity and promote longer-lasting explorations of a visualization.
GROUPTHINK
Ali Hossaini, Oliver M. Gingrich, Shama Rahman, M. Grierson, Joshua Murr, A. Chamberlain, Alain Renaud
Proceedings of the ACM on Computer Graphics and Interactive Techniques 5(1), 1-10, 2022-09-06. DOI: https://doi.org/10.1145/3533610

Live performers often describe "playing to the audience" as shifts in emphasis, timing, and even content according to perceived audience reactions. Traditional staging allows the transmission of physiological signals through the audience's eyes, skin, odor, breathing, vocalizations, and motions such as dancing, stamping, and clapping, some of which are audible. The Internet and other mass media broaden access to live performance, but they efface traditional channels for "liveness," which we specify as physiological feedback loops that bind performers and audience through shared agency. During online events, contemporary performers enjoy text- and icon-based feedback, but current technology limits the expression of physiological reactions by remote audiences. Looking to a future Internet of Neurons where humans and AI co-create via neurophysiological interfaces, this paper examines the possibility of re-establishing audience agency during live performance by using hemodynamic sensors, while exploring the potential of AI as a creative collaborator.
The Ghost in The Machine
S. Elran, Amit R. Zoran
Proceedings of the ACM on Computer Graphics and Interactive Techniques, 1-8, 2022-09-06. DOI: https://doi.org/10.1145/3533609

This paper proposes a new perspective on the role of computers in the re-materialization of ancient artifacts, highlighting issues of conservation, self-expression, and authorship in creative processes. Specifically, our approach allows all of the creative spirits (i.e., creative agencies) taking part in the making process, from the ancient makers to the digital craftsperson to the making machine itself, to be represented in the final outcome. The paper explores the evolution of our technique through three projects that rely on both digital and traditional making practices. We introduce the notion of a digital spirit, which allows for a holistic and respectful integration of diverse making agencies in a unified hybrid practice.
Dream Painter
M. Sola, Varvara Guljajeva
Proceedings of the ACM on Computer Graphics and Interactive Techniques, 1-11, 2022-09-06. DOI: https://doi.org/10.1145/3533386

This paper describes Dream Painter, an interactive robotic art installation by the artistic duo Varvara & Mar that deploys artificial intelligence (AI), a KUKA industrial robot, and interaction technology to offer the audience an artistic interpretation of their past dreams, which are then turned into a collective painting. The installation is composed of four larger parts: audience interaction design, AI-driven multicoloured drawing software, communication with an arm robot, and a kinetic part, the automatic paper progression that follows each completed dream drawing. All these interconnected parts are orchestrated into an interactive and autonomous system in the form of an art installation that occupies two floors of a cultural centre. In the article, we document the technical and conceptual frameworks of the project and the experience gained through the creation and exhibition of the installation. In addition, the paper explores the creative potential of speech-to-AI-drawing transformation, a translation between different semiotic spaces performed by a robot, as a method for audience interaction in the art exhibition context.
Woven Behavior and Ornamentation
Elizabeth Meiklejohn, Felicita Devlin, J. Dunnigan, Patricia Johnson, Joy Xiaoji Zhang, Steve Marschner, B. Hagan, Joy Ko
Proceedings of the ACM on Computer Graphics and Interactive Techniques 5(1), 1-12, 2022-09-06. DOI: https://doi.org/10.1145/3533682

Self-shaping woven textiles are those that undergo a transformation in shape, exhibiting three-dimensional behaviors due to the interplay between weave structure and active yarns that shrink, twist, or otherwise move during finishing processes such as steaming. When weaving with active yarns to produce dimensional fabrics, the unpredictability of the complex interactions involved typically necessitates arduous physical sampling for intentional design and use. Current weaving software, overwhelmingly reliant on 2D graphic depiction of woven fabric, cannot predict the dimensional appearance of such fabrics in a way that would support practical decision-making and innovative design solutions. This paper describes an iterative workflow for designing self-shaping woven fabrics, from simulation-assisted drafting to the creation of a library of woven behaviors categorized by attributes for seating design. This workflow is then used to inform the design of a new yarn-based simulator, and to design and fabricate a textile-centric furniture piece in which woven fabric behaviors and ornamentation are intentionally zoned to the form according to structural, ergonomic, and aesthetic considerations.
Erratics on the Road to Wigan Pier
Chara Lewis, K. Mojsiewicz, Anneké Pettican
Proceedings of the ACM on Computer Graphics and Interactive Techniques, 1-10, 2022-09-06. DOI: https://doi.org/10.1145/3533611

This paper focuses on the augmented reality project TouchAR to reveal the creative collaborative approaches the authors took to site, technology, ecology, and gesture in their production of interactive public-realm artworks made in direct response to the COVID-19 pandemic. Informed by the uncanny (re-animation, the double), ontology (affect, sensing, embodied encounter), and ecology (speculative fabulation, deep time), the project explores 3D scanning and AR technology as tools for transformation and engagement with ecological deep time, addressing the complications involved in offering an embodied experience with AR and its use as a means of enchantment. The authors discuss how they use technology to suture analogue and computational art-making, explore ideas of touch and engagement with ecology in a technological society, and address the deep past, present challenges, and possible futures.