When there is light, everything is visible. I decompose the fundamental element of the visual world to let the invisible become visible. It is a process of deconstructing light. I project a white light source onto a surface and use a prism, together with moving images, to "deconstruct" it. White is not an independent colour: it is a mixture of the visible spectrum, composed of the primary colours red, green, and blue. Through refraction, I separate the white source into its three primaries, turning it into rainbow light with a prism. I then take away the green light from the white, leaving a mixture of red and blue light. Without green, the light source gradually shifts to a new colour, magenta, and the 'rainbow' becomes a 'duo-coloured rainbow'. Finally, I erase red from the magenta, leaving pure blue. Because blue is a primary coloured light that cannot be decomposed further by the prism, it appears as the ultimate light source: a monochromatic 'rainbow'.
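The colour subtraction described above can be sketched with additive RGB triples. This is an illustrative model of the optics, not the artist's setup: zeroing the green component of white leaves magenta, and zeroing red from magenta leaves blue.

```python
# White as an additive mixture of the three primary lights.
WHITE = (255, 255, 255)

def remove_channel(colour, channel):
    """Zero out one additive primary (0 = red, 1 = green, 2 = blue)."""
    c = list(colour)
    c[channel] = 0
    return tuple(c)

magenta = remove_channel(WHITE, 1)   # white minus green
blue = remove_channel(magenta, 0)    # magenta minus red
```

Blue has only one nonzero channel left, which mirrors why the prism cannot split it further.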
{"title":"We can't see the rainbow in the white","authors":"Gi Wai Echo Hui","doi":"10.1145/3414686.3427132","DOIUrl":"https://doi.org/10.1145/3414686.3427132","url":null,"abstract":"When there is light, everything is visible. I decompose the fundamental element in the visual world to let the invisible become visible. It is a process deconstructing light. I project a white source of light on a surface while using prism and some moving images to \"deconstruct\" it. \"White\" is not an independent colour. It is a mixture of colour in the visible spectrum that is composed of the primary colour red, green and blue. Through refraction of light, I separated the white source with the three primary colours to rainbow light using a prism. After that, I took away green light from white light, leaving a mixture of red and blue light. Without green, the light source gradually reflects a new colour called magenta, hence the 'rainbow' becomes a 'duo-coloured rainbow'. Eventually, I erased red from magenta. The line results in pure blue colour. As blue is a primary coloured light that cannot be further decomposed by the prism, it appeared the ultimate light source in a monochromatic 'rainbow' colour.","PeriodicalId":376476,"journal":{"name":"SIGGRAPH Asia 2020 Art Gallery","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114201309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The greatest mystery of life comes when you're least expecting it and disappears when you thought it was here to stay. The heat that ignites it at the beginning is doused by the intimacy it creates. It is a portal, a mirror, a cross to bear, a joy, a heartbreak, and an axe. It cuts through your hard parts, the gristly parts, and lays your beating heart bare. It is both the butterfly that flutters in your tummy and the acid that melts everything away. That, my friend, is what we call LOVE.
{"title":"Love","authors":"Firdaus Khalid","doi":"10.1145/3414686.3427173","DOIUrl":"https://doi.org/10.1145/3414686.3427173","url":null,"abstract":"The greatest mystery of life comes when you're least expecting it and disappears when you thought it is here to stay. The heat that ignites it at the beginning is doused by the intimacy it creates. It is a portal, a mirror, a cross to bear, a joy, a heartbreak, and an axe. It cuts through your hard parts, the gristly parts, and lays your beating heart bare. It is both the butterfly that flutters in your tummy, and the acid that melts everything away. That, my friend, is what we call LOVE.","PeriodicalId":376476,"journal":{"name":"SIGGRAPH Asia 2020 Art Gallery","volume":"273 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114423503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Eyes are everywhere. Cheap, accessible technology products with high-resolution imaging capacity sit in every corner of our surroundings to surveil us. They watch us, record us, and even recognize us. These artificial gazes have become so ubiquitous and so familiar that we are no longer aware of them in everyday life. Meanwhile, human senses interact with each other and transfer from one to another. We hear vibrations, feel textures by seeing, smell tastes, taste tactility, and so on. We feel movement just by looking at a stopped escalator: our vision translates visual information to activate a motor sensation embedded somewhere in the body. "Kam" tries to twist one's familiarity with one phenomenon through another. An eyeball-shaped camera follows you and imitates your blinks. The unfamiliar, unexpected behavior of this robotic camera gives it a lively feel and, at the same time, makes it eerie and unreal. It also makes you notice your own sensation of blinking when you find yourself trying to make it blink. Even its mechanical sounds seem to make you feel your blink physically. "Kam" uses face recognition algorithms to see one layer deeper into our facial expressions. It exposes itself by reacting to an expression we would not otherwise even be aware of. It makes us pay conscious attention to it and realize our own bodily existence. "Kam" intends this trivial daily happening to become a meaningful experience.
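The abstract does not specify how Kam detects a viewer's blinks. A common measure in face-landmark pipelines is the eye aspect ratio (EAR), which drops toward zero when the eye closes; the sketch below computes it over six hypothetical (x, y) eye landmarks and uses an assumed threshold.

```python
import math

def eye_aspect_ratio(eye):
    """EAR over six landmarks (outer corner, two upper, inner corner,
    two lower): ratio of eyelid opening to eye width."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

# Illustrative landmark sets, not real tracker output.
open_eye = [(0, 0), (1, 2), (3, 2), (4, 0), (3, -2), (1, -2)]
closed_eye = [(0, 0), (1, 0.2), (3, 0.2), (4, 0), (3, -0.2), (1, -0.2)]
BLINK_THRESHOLD = 0.2  # assumed cutoff below which a blink is registered
```

A robot eye could mirror the viewer by blinking whenever the measured EAR falls below the threshold.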
{"title":"Kam","authors":"Taeil Lee","doi":"10.1145/3414686.3427116","DOIUrl":"https://doi.org/10.1145/3414686.3427116","url":null,"abstract":"Eyes are everywhere. Cheap and accessible technology products with high-resolution imaging capacity are in every corner of our surroundings to surveil us. They watch us, record us, and even recognize us. These artificial gazes became so ubiquitous and so familiar to us that we are not even aware of them in everyday lives. Meanwhile, human senses interact with each other and transfer from one to another. We hear vibrations, feel textures by seeing, smell tastes, taste tactility, and so on. We feel the movement only by seeing stopped escalator. Our vision translates the visual information to activate the motor sensation embedded to somewhere in the body. \"Kam\" tries to twist the one's familiarity by the phenomenon of the other. The eyeball-shaped camera follows you and imitates your blinks. The unfamiliar and unexpected behavior of this robotic camera gives lively feel to it and, at the same time, becomes eerie and unreal. It also makes you realize your sensation of blink when you find yourself trying to make it blink. Even its mechanical sounds seem to make you feel your blink physically. \"Kam\" utilizes the face recognition algorithms to see one layer deeper onto our facial expression. It exposes itself by reacting to the expression, when we would not even aware of it otherwise. It makes us pay conscious attention to it and realize our own bodily existence. 
\"Kam\" intended this trivial daily happening to become a meaningful experience.","PeriodicalId":376476,"journal":{"name":"SIGGRAPH Asia 2020 Art Gallery","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122913364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Turn Over is a kinetic artwork that illustrates change in people and society. Twenty-four sets of Y-shaped objects turn on a flat surface and gradually form various patterns. The Chinese character "人," which means a person, resembles the letter "Y" rotated 180 degrees. When this character is arranged regularly across a surface, the lines of the characters start to look like the boundaries of stacked cubes. If one of these characters is then turned 180 degrees, the orientation of one cube also changes (for example, the top surface becomes a side surface). The turning of a single character is too small to be noticeable; sometimes it just seems like a contradiction, an incoherence, or a betrayal. But when many characters turn at once, boundaries break and a drastic change emerges. This illustrates our change as individuals and as a society.
{"title":"Turn over","authors":"Yuichiro Katsumoto","doi":"10.1145/3414686.3427119","DOIUrl":"https://doi.org/10.1145/3414686.3427119","url":null,"abstract":"Turn Over is a kinetic art that illustrate the change of human and society. Twenty four set of Y-shaped object, which means person in Chinese, turn on a flat surface and gradually makes various pattern. The Chinese character \"人,\" which means a person, is similar to the alphabet letter \"Y\" rotated 180 degrees. When this character is arranged regularly in a lot on a surface, the lines of the characters starts to look like the boundaries of stacked cubes. Then, if one of these characters is turned 180 degrees, the orientation of one cube is also changed (For example, the top surface becomes to the side surface). This turn over of single character is too small to be noticeable. Sometimes it just seems a kind of contradiction, in-coherent, or treason. But when many characters turn at a time, it breaks boundaries and becomes a drastic change. This illustrates our change as an individual and a society.","PeriodicalId":376476,"journal":{"name":"SIGGRAPH Asia 2020 Art Gallery","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127854996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Humans and machines are in constant conversation. Humans start the dialogue with programming languages that are compiled to the binary digits machines can interpret. However, intelligent machines today are not only observers of the world; they also make their own decisions. If AI imitates human beings by creating a symbolic system for communication based on its own understanding of the universe, and starts to actively interact with us, how will this recontextualize and redefine our coexistence in this intertwined reality? To what degree can the machine enchant us with curiosity and raise our expectations of a semantic meaning-making process? Cangjie provides a data-driven interactive spatial visualization of a semantic human-machine reality. The visualization is generated by an intelligent system in real time as it perceives the real world through a camera located in the exhibition space. Inspired by Cangjie, the legendary ancient Chinese historian (c. 2650 BCE) who invented Chinese characters based on the characteristics of everything on earth, we trained a neural network, which we have named Cangjie, to learn the constructions and principles of all Chinese characters. It transforms what it perceives into a collage of unique symbols made of Chinese strokes. The symbols produced through the lens of Cangjie, tangled with the imagery captured by the camera, are visualized algorithmically as abstracted, pixelated semiotics, continuously evolving and composing an ever-changing poetic virtual reality. Cangjie is not only a conceptual response to the tension and fragility in the coexistence of humans and machines, but also an artistic, imagined expression of a future language that reflects on ancient truths in this era of artificial intelligence. The interactivity of this intelligent visualization prioritizes the ambiguity and tension that exist between the actual and the virtual, machinic vision and human perception, and past and future.
{"title":"Cangjie","authors":"Weidi Zhang, Donghao Ren","doi":"10.1145/3414686.3427153","DOIUrl":"https://doi.org/10.1145/3414686.3427153","url":null,"abstract":"Humans and machines are in constant conversations. Humans start the dialogue by using programming languages that will be compiled to binary digits that machines can interpret. However, Intelligent machines today are not only observers of the world, but they also make their own decisions. If A.I imitates human beings to create a symbolic system to communicate based on their own understandings of the universe and start to actively interact with us, how will this recontextualize and redefine our coexistence in this intertwined reality? To what degree can the machine enchant us in curiosity and enhance our expectations of a semantic meaning-making process? Cangjie provides a data-driven interactive spatial visualization in semantic human-machine reality. The visualization is generated by an intelligent system in real-time through perceiving the real-world via a camera (located in the exhibition space). Inspired by Cangjie, an ancient Chinese legendary historian (c.2650 BCE), who invented Chinese characters based on the characteristics of everything on the earth, we trained a neural network, we have named Cangjie, to learn the constructions and principles of all the Chinese characters. It transforms what it perceives into a collage of unique symbols made of Chinese strokes. The symbols produced through the lens of Cangjie, tangled with the imagery captured by the camera, are visualized algorithmically as abstracted, pixelated semiotics, continuously evolving and composing an everchanging poetic virtual reality. Cangjie is not only a conceptual response to the tension and fragility in the coexistence of humans and machines but also an artistic imagined expression of a future language that reflects on ancient truths in this artificial intelligence era. 
The interactivity of this intelligent visualization prioritizes ambiguity and tension that exist between the actual and the virtual, machinic vision and human perception, and past and future.","PeriodicalId":376476,"journal":{"name":"SIGGRAPH Asia 2020 Art Gallery","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115860334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
LightTank is an interactive Extended Reality (XR) installation that augments a large, lightweight aluminium structure with holographic line drawings. It consists of four transparent projection walls assembled into an X-shaped, tower-like construction of 7.5 x 7.5 x 5.5 m. The project was developed by the arc/sec Lab in collaboration with the Augmented Human Lab for the Ars Electronica Festival and presented in the Cathedral of Linz, Austria. It aims to expand the principles of augmented reality (AR) headsets from a single-person viewing experience toward a communal interactive event. To achieve this, LightTank uses an anaglyph stereoscopic projection method which, combined with simple red/cyan cardboard glasses, allows the creation of 3D virtual constructions. The holographic line drawings are designed to merge with their physical environment, whether the geometrical grids of the aluminium structure or the gothic architecture of the cathedral. Certain drawings seem to peel off the existing physical structure, while others travel through the cathedral and line up with characteristic elements such as columns, groined arches, and rose windows. The project follows a hybrid design strategy that gives equal attention to both design aspects, the physical and the digital. The aim of the setup is to explore user-responsive architecture, where dynamic properties of the virtual world are an integral part of the physical environment. LightTank thereby creates a multi-viewer environment that enables visitors to navigate through holographic architectural narratives.
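The red/cyan anaglyph technique the glasses decode can be sketched in a few lines: the red channel of the composite comes from the left-eye view, and the green and blue channels from the right-eye view, so each filter passes only the intended eye's image. This is a minimal illustration of the general method, not the installation's rendering pipeline.

```python
def red_cyan_anaglyph(left, right):
    """Combine two equally sized RGB images (nested lists of (r, g, b)
    pixel tuples) into one red/cyan anaglyph frame."""
    return [
        [(lp[0], rp[1], rp[2]) for lp, rp in zip(lrow, rrow)]
        for lrow, rrow in zip(left, right)
    ]

# One-pixel example views (illustrative values).
left_view = [[(255, 0, 0)]]
right_view = [[(0, 128, 64)]]
frame = red_cyan_anaglyph(left_view, right_view)
```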
{"title":"LightTank","authors":"Uwe Rieger, Yinan Liu, Roger Boldu, Haimo Zhang, Heetesh Alwani, Suranga Nanayakkara","doi":"10.1145/3414686.3427113","DOIUrl":"https://doi.org/10.1145/3414686.3427113","url":null,"abstract":"LightTank is an interactive Extended Reality (XR) installation that augments a large lightweight aluminium structure with holographic line drawings. It consists of four transparent projection walls which are assembled to an X shape tower like construction of 7.5 x 7.5 x 5.5 m. The project was developed by the arc/sec Lab in collaboration with the Augmented Human Lab for the Ars Electronica Festival and presented in the Cathedral of Linz in Austria. It aims to expand principles of augmented reality (AR) headsets from a single person viewing experience, towards a communal interactive event. To achieve this goal, LightTank uses an anaglyph stereoscopic projection method, which combined with simple red/cyan cardboard glasses, allows the creation of 3D virtual constructions. The holographic line drawings are designed to merge with its physical environment, whether it is the geometrical grids of the aluminium structure or the gothic architecture of the cathedral. Certain drawings seem to peel off the existing physical structure, while others travel through the cathedral and line up with the characteristic elements like columns, groined arches and rose windows. The project follows a hybrid design strategy which places equal attention to both design aspects, the physical and the digital. The aim of the setup is to explore user responsive architecture, where dynamic properties of the virtual world are an integral part of the physical environment. 
LightTank creates hereby a multi-viewer environment which enables visitors to navigate through holographic architectural narratives.","PeriodicalId":376476,"journal":{"name":"SIGGRAPH Asia 2020 Art Gallery","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117215521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
People have indeed spent more time in deep thought since 2020. Questions once asked mainly by sociologists have become topics at the dining table, and debates on social and moral dilemmas run around the clock on the internet. We have started to think more about who we are, where we are going, and how we value the information we receive. Do we have freedom? Should we believe in absolute freedom? Sometimes people directly translate the idea of liberty into democracy; but should we also equate freedom with democracy? Since we are all inside the same pandemic bubble, after most people stayed at home for a couple of months, a global-scale collective memory began to emerge, which lets people understand others' situations more empathetically. Meanwhile, more and more people have to learn and gain experience virtually. This attention to empathy and the new work-from-home mode evoked the initial idea of this virtual reality experience. We started to ask how people could learn and think more effectively in this brand-new virtual age. A Unity program makes this innovation possible. The innovative architectural modeling permits a large group of people to experience personal spaces and shared areas simultaneously. The sound design is tailored to the various spatial sounds and to the audience's interactions. We use this program to build an immersive, empathetic space that embodies a hypothetical argument about a social dilemma in a virtual manifestation. By standing in the same shoes, people might be able to figure out the most meaningful answer. Social distance can also be controlled virtually in this program, by counting whether the number of participants overloads a space.
{"title":"The world of freedom","authors":"Borou Yu, Tiange Zhou, Zeyu Wang, Jiajian Min","doi":"10.1145/3414686.3427172","DOIUrl":"https://doi.org/10.1145/3414686.3427172","url":null,"abstract":"Indeed, people spend more time on deep thinking since 2020. The questions which ask mainly by the sociologists, now become the topics on the dining table. The debates on social and moral dilemmas are happening intensively 24 hours on the internet. We started to think more about who we are, where we are going, and how we will value the information we have received. Do we have freedom? Shall we believe absolute freedom? Sometimes people directly transform the idea of liberty into democracy. However, shall we also equal freedom to democracy? Since we are all inside this one pandemic bubble, after most people stay at home for a couple of months, we start emerging a global-size collective memory, which makes people more empathetically understand others' situations. Meanwhile, more and more people have to learn and take experience virtually. The attention of empathy and the new work-from-home mode evokes the initial idea of this virtual reality experience. We start to ask how people could learn and think more effectively in this brand new virtual age? Unity program makes this innovation possible. The innovative architecture modeling could permit a large group of people to experience personal space and sharing areas simultaneously. The sound design is specially designed for the various space sound and the audience's interactivities. We use this program to build up an immersive and empathetic space that embodies a hypothetical argument of a social dilemma into a virtual manifestation. People might be able to figure out the most meaningful answer by wearing the same shoes. 
The social distance could also be virtually controlled in this program by counting if the number of participates overload spaces.","PeriodicalId":376476,"journal":{"name":"SIGGRAPH Asia 2020 Art Gallery","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114518277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI is essentially 'intelligence' programmed by humans. Although an AI that can joke, communicate, and tell a story resembles a human, can we continue to have a natural conversation with it once it is identified as AI? Current AI is applied only to particular fields, as it is still at the stage of 'weak intelligence'. It is expected to develop into 'general' or 'strong' intelligence in the future, imitating the whole of human intelligent activity. Through conversation with an AI that is able to learn emotional words, this work lets us indirectly experience, and consider, whether we would treat AI as we do humans if it develops into 'strong intelligence'.
{"title":"Life","authors":"J. Park, Kyoungmin Bang","doi":"10.1145/3414686.3427157","DOIUrl":"https://doi.org/10.1145/3414686.3427157","url":null,"abstract":"AI is essentially 'intelligence' programmed by humans. Although AI which can joke, communicate and tell a story resembles humans, can we continue to have a natural conversation with them, even though it is identified as AI? The current AI is only applied to a certain field, as it is at the step of 'weak intelligence'. It is expected to be developed into 'general or strong intelligence' imitating humans' whole intelligent activity in the future. This work allows us to indirectly experience to consider whether we would treat AI as we do humans, if it is developed into 'strong intelligence', through the conversation with AI which is able to learn emotional words.","PeriodicalId":376476,"journal":{"name":"SIGGRAPH Asia 2020 Art Gallery","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125811177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Seiya Aoki, Yusuke Yamada, Santa Naruse, Reo Anzai, Aina Ono
This work is an online installation that creates a new audio-visual experience using automatic video selection with deep learning. Video expression in audio-visual work and in DJ+VJ performance, where sound and images coexist, has been based on sampling methods that combine existing clips, generative methods computed in real time by computer, and the use of the sound of a phenomenon or situation itself. Its visual effects have extended music and given it new meanings. In all of these methods, however, the selection of the video, and the program itself, was premised on the artist's arbitrary decision to match the music. This online installation eliminates that arbitrariness: it creates a new audio-visual work by comparing, in the same feature space, the features of the music and the features of a number of images the artist selected beforehand, and selecting among them automatically. The sound of a YouTube video selected by the viewer is segmented every few seconds, and the closest video is chosen by comparing these features with the features of countless short clips of films and videos prepared in advance in the same space. This video selection method uses deep learning to reconstruct the mapping that artists have so far built between video and sound, and suggests possible correspondences. In addition, unconnected scenes from different films, and images that have never been connected before, become a single image and emerge as a whole, and the viewer finds a story in the relationships between them. With this work, audio-visual and DJ+VJ expression is freed from arbitrary decisions, and artists are given a new perspective.
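The automatic selection step described above can be sketched as a nearest-neighbour search over feature vectors. Cosine similarity over small hand-written vectors stands in here for whatever embedding the work's deep-learning model actually produces; the clip names and features are hypothetical.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def closest_clip(audio_feature, clip_features):
    """Return the name of the pre-computed clip feature nearest to the
    current audio segment's feature."""
    return max(clip_features, key=lambda name: cosine(audio_feature, clip_features[name]))

# Hypothetical pre-computed clip library.
clips = {"sea_scene": [1.0, 0.0, 0.0], "city_scene": [0.0, 1.0, 0.0]}
```

Each few-second audio segment would be embedded and passed to `closest_clip` to pick the next video.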
{"title":"holarchy","authors":"Seiya Aoki, Yusuke Yamada, Santa Naruse, Reo Anzai, Aina Ono","doi":"10.1145/3414686.3427159","DOIUrl":"https://doi.org/10.1145/3414686.3427159","url":null,"abstract":"This work is an online installation that creates a new audio-visual using automatic video selection with deep learning. Video expression in the audio-visual and DJ+VJ, where sound and images coexist together, has been based on sampling methods by combining clips that already exist, generative methods that are computed in real time by computer, and the use of the sound of the phenomenon and the situation itself. Its visual effects have extended music and given it new meanings. However, in all of these methods, the selection of the video and the program itself was premised on the artist's arbitrary decision to match the music. This work is an online installation that eliminates the arbitrariness of the artist, creating a new audio-visual work by comparing in the same space the feature of the music and the feature of a number of images selected by the artist beforehand and selecting them automatically. In this work, the sounds of the youtube video selected by the viewer are separated every few seconds, and the closest video is selected by comparing these features with the features of countless short clips of movies and videos prepared in advance in the same space. This video selection method reconstructs the mapping relationship that artists have constructed so far between video and sound using deep learning, and suggests the possibility of possible correspondences. In addition, unconnected scenes from different films and images that have never been connected before become a single image, and emerge as a whole, and the viewer finds a story in the relationship between them. 
With this work, audio-visual and DJ+VJ expression is freed from the arbitrary decision, and a new perspective is given to the artists.","PeriodicalId":376476,"journal":{"name":"SIGGRAPH Asia 2020 Art Gallery","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125956580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Distance Music: Preferred Population Density for The Acoustic Hygiene" is an interactive ambient music installation, which interacts with the number of listeners in the installation. The installation consists of three parts, which is the music engine, sensor, and a video loop. While people watch some archives of vintage government educational videos about social hygiene inside the room, the sensor will measure density of the room and send it to the pre-recorded music engine, which will interact with the data from density sensor, evolving into more intense music as the population density rises. As a result, people will hear unpleasant noise the more they are close to each other. This installation is inspired from 'social distancing', a global experience during this ongoing pandemic era, as some of the music engine process represents and simulates 'distancing alarm' for social distancing, therefore acts as an experiment of public alarm for social distancing. We believe that the people's memory with this unprecedented worldwide incident will resonate efficiently with the installation, and a great deal of inspiration as well.
{"title":"Distance music: preferred population density for the acoustic hygiene","authors":"Sung-Gil Jang, Dongmin Kim","doi":"10.1145/3414686.3427134","DOIUrl":"https://doi.org/10.1145/3414686.3427134","url":null,"abstract":"\"Distance Music: Preferred Population Density for The Acoustic Hygiene\" is an interactive ambient music installation, which interacts with the number of listeners in the installation. The installation consists of three parts, which is the music engine, sensor, and a video loop. While people watch some archives of vintage government educational videos about social hygiene inside the room, the sensor will measure density of the room and send it to the pre-recorded music engine, which will interact with the data from density sensor, evolving into more intense music as the population density rises. As a result, people will hear unpleasant noise the more they are close to each other. This installation is inspired from 'social distancing', a global experience during this ongoing pandemic era, as some of the music engine process represents and simulates 'distancing alarm' for social distancing, therefore acts as an experiment of public alarm for social distancing. We believe that the people's memory with this unprecedented worldwide incident will resonate efficiently with the installation, and a great deal of inspiration as well.","PeriodicalId":376476,"journal":{"name":"SIGGRAPH Asia 2020 Art Gallery","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121690584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}