Geometry Nodes is a procedural modeling and animation system that has been part of Blender since the release of Blender 2.92 in 2021. It initially focused on set dressing, procedural modeling, and hair grooming. In Blender 3.6, simulation nodes were finally added. This class is a hands-on show-and-tell of Geometry Nodes, with particular emphasis on the brand-new simulation system.
Dalai Felinto and Simon Thommes. "Blender's Simulation Nodes: A workshop on creating a melting effect with Geometry Nodes in Blender." ACM SIGGRAPH 2023 Labs, July 23, 2023. DOI: 10.1145/3588029.3599739.
This hands-on exhibition, based upon a physical computing artwork, will allow conference-goers to participate in the creation of an ongoing audio-visual composition.
Ryan Buyssens. "[in]florescence – a tangible audio-visual installation." ACM SIGGRAPH 2023 Labs, July 23, 2023. DOI: 10.1145/3588029.3595472.
Chinatsu Ozawa, Kenta Yamamoto, Kazuya Izumi, Y. Ochiai
The proliferation of smartphones has made it easy for anyone to take digital photographs, and the recent popularization of text-to-image models has made it easy for anyone to create images. In this age, by combining digital technology with the tactile experience of handmade processes, we can rediscover the joy of creating with our own hands and the emotional connection that comes from physically interacting with our work. Previously, we proposed a new printing framework that integrated computer processing with full-color cyanotype printing. In this work, we demonstrate how computer processing for tone adjustment can expand the range of aesthetic expression across several alternative processes, such as salt print, platinum print, and cyanotype. In the installation, we present our printing framework with its user interface and exhibit works created with the proposed method. The use of new media developed after the digital age, together with the integration of computer processing into photo printing, may be a way to create a new photographic life with the joy of materialising scenery.
Chinatsu Ozawa, Kenta Yamamoto, Kazuya Izumi, and Y. Ochiai. "Give Life Back to Alternative Process: Exploring Handmade Photographic Printing Experiments towards Digital Nature Ecosystem." ACM SIGGRAPH 2023 Labs, July 23, 2023. DOI: 10.1145/3588029.3599735.
Coding empowers automation: scripts can handle mundane and repetitive tasks efficiently and precisely. This course uses a hands-on, interactive format to walk attendees through representative scripting projects selected to be useful in everyday workflows. It is an intermediate course; the goal is to provide enough information for attendees to build on later. Python scripting can automate many tasks in Maya, from running simple commands to developing plug-ins. Through a hands-on project, attendees will learn how to automate a simple task: placing objects in a scene by scripting MASH (motion graphics) networks. Attendees should walk away with a solid understanding of the power that Python scripting and Maya commands provide, and the ability to conceive their own advanced projects for Maya. Attendees should have programming experience, preferably in Python, but a solid grasp of foundational programming constructs should suffice. Attendees who intend to follow along should have Autodesk Maya, Python, and Visual Studio Code pre-loaded on their devices.
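The kind of placement logic the course scripts can be sketched in plain Python. The function below is a hypothetical stand-in, not course material: it computes the world-space positions for a centered grid of objects, the sort of data one would then feed to Maya commands (for example `cmds.move()`) or to a MASH distribute setup.

```python
def grid_positions(rows, cols, spacing):
    """Return (x, 0, z) world-space positions for a rows x cols grid,
    centered on the origin. Inside Maya, a loop over these tuples
    would create an object (e.g. cmds.polyCube()) and move it into
    place; here we only generate the coordinates."""
    x_offset = (cols - 1) * spacing / 2.0
    z_offset = (rows - 1) * spacing / 2.0
    return [
        (c * spacing - x_offset, 0.0, r * spacing - z_offset)
        for r in range(rows)
        for c in range(cols)
    ]
```

Keeping the pure math separate from the Maya calls like this also makes the automation easy to test outside of Maya.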
Ann McNamara. "Unleashing the Power of Python in Autodesk Maya." ACM SIGGRAPH 2023 Labs, July 23, 2023. DOI: 10.1145/3588029.3599745.
Paper Animatronics is a new way for elementary school kids to engage with subject matter through project-based learning. We’re upgrading the classic shoe box diorama, and empowering kids to bring it to life by adding servo motors, sound and lights to create compelling characters and shows. In this workshop designed for teachers, parents and other advocates for creative education, you will get to work hands-on with our new paper animatronics kits. These make it easy to create talking characters that you can voice in real-time or use in more complex, scripted shows where things move and light up on cue using a synchronized Arduino program. Through these activities, you will see how kids learn to be creative across both technical and artistic disciplines as they explore class subject matter.
P. Dietz. "Elementary Paper Animatronics." ACM SIGGRAPH 2023 Labs, July 23, 2023. DOI: 10.1145/3588029.3599744.
This SIGGRAPH lab is an introduction to creating reactive graphics in p5.js that respond to inputs from external sensors connected through a hand-built circuit via the Raspberry Pi Pico microcontroller. Participants are guided through the process of building a working hardware and software template that can be further customised for their own creative designs.
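The sensor-to-graphics mapping at the heart of such a setup can be illustrated independently of the hardware. The helper below is a hypothetical example, not part of the lab materials: it rescales a raw 16-bit reading, like those the Pico's ADC produces, into the 0–255 range typical of p5.js color and size parameters.

```python
def adc_to_param(raw, adc_max=65535, out_max=255):
    """Linearly rescale a raw ADC reading in [0, adc_max] to an
    integer visual parameter in [0, out_max], clamping out-of-range
    input so noisy sensor values cannot break the visuals."""
    raw = max(0, min(raw, adc_max))
    return round(raw * out_max / adc_max)
```

In a real deployment the reading would arrive over serial or WebSocket from the microcontroller, and the p5.js sketch would apply the same linear rescaling on the browser side.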
Kieran Nolan. "Reactive Visuals in P5.js with Custom Analog and Digital Inputs." ACM SIGGRAPH 2023 Labs, July 23, 2023. DOI: 10.1145/3588029.3599742.
Flamenco is an open-source render farm system developed by Blender Studio, aimed at tasks such as frame rendering and video encoding. This hands-on class will teach how to install and use it and, most importantly, how to adjust and extend it for your specific needs.
Sybren A. Stüvel. "Flamenco: The Simple Open Source Render Farm." ACM SIGGRAPH 2023 Labs, July 23, 2023. DOI: 10.1145/3588029.3599738.
This lab will provide a hands-on introduction to visualizing spatial data with interactive maps that can be deployed as public web pages. We will use a combination of RStudio, the Shiny package, and the open-source Leaflet library to introduce how to combine data and maps into public web pages. Attendees will gain an overview of RStudio, Leaflet, and Shiny applications. They will learn how to install the Leaflet and Shiny packages, create and customize different types of Leaflet maps, including a choropleth, and develop a Shiny application deployable on the web.
Ann McNamara. "Building Maps on the Web Using RStudio, Leaflet, & Shiny." ACM SIGGRAPH 2023 Labs, July 23, 2023. DOI: 10.1145/3588029.3599737.
Michał Seta, Eduardo A. L. Meneses, Emmanuel Durand, Christian Frisson
This hands-on class will allow artists to use open-source tools to create interactive and immersive experiences. These tools have been created and incubated at the Society for Arts and Technology (SAT), a unique non-profit organization in Canada whose mission is to democratize technologies to enable people to experience and author multisensory immersions. During the class we invite participants to use their favorite software on platforms they are already familiar with, to interface with our tools. The toolset will include transmission protocols, video mapping tools, sound spatialization software, and gestural control using pose detection. The class will be organized in two parts: a presentation of the tools and context involving the development and applications, and a hands-on session with an ephemeral immersive space. This event is designed for art researchers, artists, designers, content creators, and other creatives interested in creating immersive spaces using research-developed tools. Participants will learn how to employ open-source tools for different artistic tasks so that they will be able to deploy their own immersive spaces after the class.
Michał Seta, Eduardo A. L. Meneses, Emmanuel Durand, and Christian Frisson. "Sketching Pipelines for Ephemeral Immersive Spaces." ACM SIGGRAPH 2023 Labs, July 23, 2023. DOI: 10.1145/3588029.3599740.
AI-generated images burst onto the scene about a year ago, with tools like Stable Diffusion, Midjourney, and DALL·E 2 all making their debut in 2022. How do these models work, and how can they be used in a production setting? In this talk, we will give an overview of how models like DALL·E 2 work and how to leverage their architectures to make them truly useful tools in the creative process. Although there are differences between each specific model architecture, the takeaways from understanding this particular stack are transferable to the others.
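The central mechanism these models share, diffusion, can be sketched in a few lines. The toy NumPy example below is illustrative only and corresponds to no production model: it shows the forward noising step that training inverts, where a clean image is blended with Gaussian noise according to a cumulative schedule, and the network learns to predict that noise.

```python
import numpy as np

def forward_noise(x0, t, alpha_bar, rng):
    """DDPM-style forward process:
    x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps.
    x0 is the clean image; alpha_bar is the cumulative product of the
    noise schedule. Returns the noised image and the noise itself,
    which is the denoiser's training target."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps

# A linear beta schedule and its cumulative product; the specific
# values here are illustrative, in the spirit of the original DDPM
# formulation. By the final step almost no signal remains.
betas = np.linspace(1e-4, 0.02, 1000)
alpha_bar = np.cumprod(1.0 - betas)
```

Generation runs this process in reverse: starting from pure noise, the trained network repeatedly estimates and removes the noise component, step by step, until an image emerges.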
Joyce Lee and Natalie Summers. "Channeling Creativity Through a Deeper Understanding of AI Image Generation." ACM SIGGRAPH 2023 Labs, July 23, 2023. DOI: 10.1145/3588029.3599743.