Scan was inspired by an accident I had in 2019. Although my arm was injured, I barely remember the details of the accident. Through this work, I would like to explore the relationship between the memory held in our bodies and the memory we fill in with our own imagination. As the strips reveal the scar, the memory of the event is no longer clear; it is filled in with our own interpretation.
{"title":"Scan","authors":"Yalan Wen","doi":"10.1145/3414686.3427123","DOIUrl":"https://doi.org/10.1145/3414686.3427123","url":null,"abstract":"Scan was inspired by an accident I encountered in 2019. After my arm was injured, I barely remember the details of the accident. Through this work, I'd like to explore the relationship between our body memory and the memory we fill up with our own imaginations. As the strips reveal the scar, the memory of the event is not clear anymore, it's filled with our own interpretation.","PeriodicalId":376476,"journal":{"name":"SIGGRAPH Asia 2020 Art Gallery","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126068117","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Uncertain Facing is a data-driven, interactive audiovisual installation that aims to represent the uncertainty of data points whose positions in 3D space are estimated by machine learning techniques. It also tries to raise concerns about the possible unintended use of machine learning with synthetic/fake data. Uncertain Facing visualizes the real-time clustering of fake faces in 3D space through t-SNE, a non-linear dimensionality reduction technique, applied to the faces' embeddings. This clustering reveals which faces are similar to each other based on an assumed probability distribution over the data points. However, departing from the original purpose of t-SNE as a tool for objective data exploration in machine learning, the installation represents data points as metaballs, in which two or more face images merge into a single face when they come close enough, to reflect the uncertain and probabilistic nature of the data locations the t-SNE algorithm yields. Metaball rendering is thus used as a means of abstract, probabilistic representation of data, as opposed to the exactness we expect from scientific visualizations. Along with the t-SNE and metaball-based visualization, Uncertain Facing sonifies the change of the overall data distribution in 3D space using a granular sound synthesis technique. Uncertain Facing also reflects the error values that t-SNE measures at each iteration between the distribution in the original high-dimensional space and the deduced low-dimensional distribution, representing the uncertainty of the data as jittery motion and inharmonic sound. As an interactive installation, Uncertain Facing allows the audience to see the relationship between their own face and the fake faces, implying that machine learning could be misused in unintended ways, since face recognition technology does not distinguish between real and fake faces.
{"title":"Uncertain facing","authors":"si-chan park","doi":"10.1145/3414686.3427161","DOIUrl":"https://doi.org/10.1145/3414686.3427161","url":null,"abstract":"Uncertain Facing is a data-driven, interactive audiovisual installation that aims to represent the uncertainty of data points of which their positions in 3D space are estimated by machine learning techniques. It also tries to raise concerns about the possibility of the unintended use of machine learning with synthetic/fake data. Uncertain Facing visualizes the realtime clustering of fake faces in 3D space through t-SNE, a non-linear dimensionality reduction technique, with face embeddings of the faces. This clustering reveals what faces are similar to each other based on the assumption of a probability distribution over data points. However, unlike the original purpose of t-SNE that is meant to be used in an objective data exploration in machine learning, it represents data points as metaballs, in which two or more face images become a merged face when they are close enough, to reflect the uncertain and probabilistic nature of data locations the t-SNE algorithm yields. As a result, metaball rendering is used as a means of an abstract, probabilistic representation of data as opposed to exactness that we expect from the use of scientific visualizations. Along with the t-SNE and metaball-based visualization, Uncertain Facing sonifies the change of the overall data distribution in 3D space based on a granular sound synthesis technique. Uncertain Facing also reflects error values, which t-SNE measures at each iteration between a distribution in original high dimensions and a deduced low-dimensional distribution, to represent the uncertainty of data as jittery motion and inharmonic sound. As an interactive installation, Uncertain Facing allows the audience to see the relationship between their face and the fake faces, implying an aspect that machine learning could be misused in an unintended way as face recognition technology does not distinguish between real and fake faces.","PeriodicalId":376476,"journal":{"name":"SIGGRAPH Asia 2020 Art Gallery","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122861693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Drama mask (lianpu, literally "face book") culture is one strand of China's long tradition of classical culture, yet today many young people have all but forgotten it. Our aim is to find a way that makes this culture easier for young people to engage with by bringing opera masks into public view through new media. In this work we break away from the traditional opera mask and allow users to design their own masks in the styles they like: AI technology transforms the mask elements (eyes, nose, mouth, and lines) into a range of styles popular in modern fashion, such as pixel style and glitch (line-fault) style. Using a tablet or mobile phone, users design their own opera mask in the program and then, with one tap, project it onto the wall, where they can watch the generation process and final form of the mask they designed and take away a new opera-mask media work of their own. This work wants to show that traditional culture is not outdated at all, and can in fact be very cool.
{"title":"Facebook art","authors":"Yunqing Xu, Yi Ji, Jingxin Lan, Qiaoling Zhong","doi":"10.1145/3414686.3427125","DOIUrl":"https://doi.org/10.1145/3414686.3427125","url":null,"abstract":"Drama mask culture is a classification in the long history of Chinese traditional culture. But now many young people had been almost forgotten this kind of culture, our purpose is to think through a can make young people more likely to focus on a way, the opera masks appear in public, utilizing new media in this work, we break the traditional opera masks, allow the user to the style of be fond of according to oneself design their own opera masks, generated by AI technology will play facebook elements (eyes nose mouth lines) into various styles: Pixel style, style and line fault style more popular modern fashion elements, such as the user side/tablet and mobile phone use through the program can design their own opera masks, and then click send projection on the wall, appreciation of their own design art show facebook generation process and form, get belongs to own a new opera masks media work. This work wants to let you know: in fact, traditional culture has not been outdated, and can be very cool.","PeriodicalId":376476,"journal":{"name":"SIGGRAPH Asia 2020 Art Gallery","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114472217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
After three months of being quarantined in our tiny apartment, this project, "SkyWindow," came into being. The concept of "SkyWindow" is a mental escape from reality, especially in these unprecedented times. Quarantined in a fully enclosed space for countless hours and days, people desperately look for relief in any way possible. Through the artist's interactive design, looking up at an imaginary sky can be the most enjoyable way to find immediate comfort without going out. "SkyWindow" is an immersive and intimate experience: an interactive installation of sky-like projections on the ceiling, as if a void had been opened in it. A dark environment with a projected sky/universe on the ceiling draws the audience to walk closer underneath. The visuals then invite the audience to reach out their hands, as if touching the sky, triggering raindrops (a meteor shower) and sounds that fall from the "SkyWindow." The "SkyWindow" metaphorically represents a piece of "hope" that people can hold onto during the pandemic. Whether it shows a distant planet in the dark or sunlight in the bright, the design offers unexpected joy and surprise. Beyond exposing viewers to different spatial scenes, waving one's hands in the air through this "SkyWindow" triggers the (meteor) shower falling from the sky, which ironically alludes to the sense of control that people have been losing for a while in such an unpredictable moment. The (meteor) shower also implicitly refers to washing away all the illness and sadness, returning us to clean and pure spirits.
{"title":"SkyWindow","authors":"Jia-Rey Chang","doi":"10.1145/3414686.3427133","DOIUrl":"https://doi.org/10.1145/3414686.3427133","url":null,"abstract":"During 3 months of being quarantine in our tiny apartment, here comes the project, \"SkyWindow\". The concept of the \"SkyWindow\" is the idea of being a mental escape from reality, especially under the unprecedented time. Being quarantine in an entire enclosure space continuously for numerous hours and days, people are desperately looking for reliefs in any possible ways. Through the artist's interactive design, looking up to the imaginary sky could be the most enjoyable solution to get the immediate comfort without going out. The \"SkyWindow\" is an immersive and intimate experience with sky-like projections on the ceiling like putting a void hole to it as an interactive installation. A dark environment with the projected sky/universe on the ceiling intriguing the audience to walk closer underneath. Further, the visual graphic will induce the audience to reach out to their hands like touching the sky to trigger the raindrops (meteor shower) and sounds falling from the \"SkyWindow.\" The \"SkyWindow\" here metaphorically represents a piece of \"hope\" people can expect during the pandemic. No matter a planet far away in the dark or sunlight in the bright, it gives you unexpected joy and surprise in the design. Besides exposing under different spatial scenes, through this \"SkyWindow,\" waving hands in the air will trigger the (meteor) shower falling from the Sky which ironically implies the power of control that people have been losing it for a while under such an unpredictable moment. And the (meteor) shower implicitly refers to wash out all the illness and sadness for returning the clean and pure spirits.","PeriodicalId":376476,"journal":{"name":"SIGGRAPH Asia 2020 Art Gallery","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130619768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
It is an inevitable fact that social interaction nowadays is heavily mediated and distorted through social network systems. The presence of various media replicates our images, which are reproduced under the confirmation bias of our cognitive processes. In this process a distorted gap opens between the original and the reproduced images through which we reveal our identity. The relationships formed through this refracted self-image affect the user and other users, and eventually form a complicated surveillance system. "CURVEillance" is an interactive art installation that criticizes this phenomenon through the vision of cameras that track the audience approaching them, in the following steps: 1) digital images on the media wall show a re-pixeled visualization of the audience's image as seen through the cameras, and then 2) to capture the audience's movements, the camera system actively moves and follows them. Specifically, each camera lens automatically reacts to and stares at the most active individual in the exhibition space in real time. The media wall presents the reflected images of audience members as objects observed by a crowd of cameras. In this process the images are scattered and overlapped, so that the original form is hard to recognize. Some participants attempt to get the cameras' attention even if their image may be damaged, while others are monitored unintentionally. The installation thus induces a reversed interaction between participants and the media wall that creates tension with the technical eye. Ultimately, it aims to raise questions about the distorted relationships of individuals who continue to use media systems in the digital era.
{"title":"CURVEilance","authors":"Hyunchul Kim, Seonghyeon Kim, Ji Young Jun, J. Oh","doi":"10.1145/3414686.3427108","DOIUrl":"https://doi.org/10.1145/3414686.3427108","url":null,"abstract":"It is an inevitable fact that social interaction nowadays is heavily mediated and distorted through the social network system. The presence of various media replicates our images to reproduce under a confirmation bias in our cognitive process. There has been a distorted gap between the original and reproduced images to reveal our identity in this process. The relationship form through this refracted self-image affects the user and the other users and eventually forms a complicated surveillance system. \"CURVEillance\" is an interactive art installation that criticizes this phenomenon by the vision of cameras that track the audience who approaches the surveillance cameras by following steps: 1) Digital images on the media wall are shown by re-pixeled visualization of the audience image through the vision of cameras, and then, 2) In order to capture the movements of audiences, the camera system actively moves and follows them. Specifically, each single camera lens automatically reacts and stares at the most active individual in the exhibition space by real-time. The media wall presents the reflected images of audiences as the observed objects by a crowd of cameras. In this process, images are scattered and overlapped so that it is hard to recognize the original form. Some participants attempt to get the camera's attention even if their image can be damaged while others are unintentionally monitored. Therefore, it induces a reversed interaction between participants and the media wall that brings tension from the technical eye. Ultimately, the installation aims to raise questions about the distortions of relationships of individuals who usually continue to use the media system in the digital era.","PeriodicalId":376476,"journal":{"name":"SIGGRAPH Asia 2020 Art Gallery","volume":"155 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121779610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Highlighting endangered species in New York State and beyond, 'The Sinking Garden' is a new-technology project that integrates a Virtual Reality application with the language of fine art to depict ecosystems whose survival is at risk. The project is based on and inspired by research conducted by the New York State Department of Environmental Conservation, The Cornell Lab of Ornithology, and The International Union for Conservation of Nature (IUCN). Focusing on specific endangered animals and plants of extraordinary beauty and significance for biodiversity, the project metaphorically depicts critical environmental issues. The Sinking Garden uniquely combines painting and new imaging technology to portray endangered species. First, a series of paintings was created in a distinctive style inspired by folk art traditions from diverse cultures. Second, through research and consultation with scientists in nature conservation, digital portraits of endangered species were chosen and produced as VR components. Finally, the VR platform brings the animals and plants to life in a 3D environment within cyberspace. The Sinking Garden project is intended to expand the capacity of visual art by utilizing the new imaging technologies of our age. Interweaving aesthetics with educational experience, this new media art project aims to inspire viewers to cherish the natural world that we call home.
{"title":"The sinking garden","authors":"Xiying Yang, Honglei Li, He Li","doi":"10.1145/3414686.3427144","DOIUrl":"https://doi.org/10.1145/3414686.3427144","url":null,"abstract":"Highlighting endangered species in New York State and beyond, 'The Sinking Garden' is a new technology project integrating Virtual Reality application with fine art language to depict ecosystems that are at risk of survival. The project is based on and inspired by research conducted by New York State Department of Environmental Conservation, The Cornell Lab of Ornithology and The International Union for Conservation of Nature (IUCN). Focusing on specific endangered animals and plants that exhibit extraordinary beauty and significance in biodiversity, the project metaphorically depicts critical environmental issues. The Sinking Garden uniquely combines painting and new imaging technology to portray endangered species. At first, a series of paintings were created in a distinctive style inspired by folk art traditions from diverse cultures. Second, through conducting research and consulting scientists in nature conservation, the digital portraits of endangered species were chosen and produced as VR components. Finally, the VR platform brings animals and plants come to life in the 3D environment within cyberspace. The Sinking Garden project is intended to expand the capacity of visual art by utilizing new imaging technology in our age. Interweaving aesthetics with educational experience, the new media art project aims at inspiring viewers to cherish the natural world that we call home.","PeriodicalId":376476,"journal":{"name":"SIGGRAPH Asia 2020 Art Gallery","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115790347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"The Synthetic Cameraman" is a full-screen, real-time, 3D graphics simulation that critically challenges the notions of remediation, processuality, linearity, and creative agency in computer-generated virtual environments. The application is rendering a virtual scene depicting a volcanic mountain landscape with a centrally located volcanic cone that is violently erupting with pyroclastic flow and rocks of different sizes being expelled as molten lava rivers are traveling down the slope forming a lava lake at the foot of the cone. The visual aspect of the phenomenon is enhanced by deep sounds of rumbling earth and rocks hitting the bottom of the caldera and falling down the slope. The viewing takes about 3--5 minutes and is divided into three sections with the middle section constituting the core experience where the control over individual elements in the scene is given over to the algorithms. The weather conditions, eruption, and the settings of virtual camera - its dynamic movement and image properties - are procedurally generated in real-time. The range of possible values that the camera is using can go beyond the capabilities of physical cameras, which makes it a hypermediated representational apparatus, producing partially abstract, semi-photorealistic ever changing fluid visuals originating from a broadened aesthetic spectrum. The algorithms are also controlling various post-processing effects that are procedurally applied to the camera feed. All of these processes are taking place in real-time, therefore every second of the experience is conceived through a unique entanglement of settings and parameters directing both the eruption and its representation. Each second of the simulation as perceived by the viewer is a one-time event, that constitutes this ever-lasting visual spectacle. The artwork can be displayed in a physical setting using a TV / projector or in a virtual setup as a continuous image feed (stream) produced by the application.
{"title":"The synthetic cameraman","authors":"Lukasz Mirocha","doi":"10.1145/3414686.3427127","DOIUrl":"https://doi.org/10.1145/3414686.3427127","url":null,"abstract":"\"The Synthetic Cameraman\" is a full-screen, real-time, 3D graphics simulation that critically challenges the notions of remediation, processuality, linearity, and creative agency in computer-generated virtual environments. The application is rendering a virtual scene depicting a volcanic mountain landscape with a centrally located volcanic cone that is violently erupting with pyroclastic flow and rocks of different sizes being expelled as molten lava rivers are traveling down the slope forming a lava lake at the foot of the cone. The visual aspect of the phenomenon is enhanced by deep sounds of rumbling earth and rocks hitting the bottom of the caldera and falling down the slope. The viewing takes about 3--5 minutes and is divided into three sections with the middle section constituting the core experience where the control over individual elements in the scene is given over to the algorithms. The weather conditions, eruption, and the settings of virtual camera - its dynamic movement and image properties - are procedurally generated in real-time. The range of possible values that the camera is using can go beyond the capabilities of physical cameras, which makes it a hypermediated representational apparatus, producing partially abstract, semi-photorealistic ever changing fluid visuals originating from a broadened aesthetic spectrum. The algorithms are also controlling various post-processing effects that are procedurally applied to the camera feed. All of these processes are taking place in real-time, therefore every second of the experience is conceived through a unique entanglement of settings and parameters directing both the eruption and its representation. Each second of the simulation as perceived by the viewer is a one-time event, that constitutes this ever-lasting visual spectacle. The artwork can be displayed in a physical setting using a TV / projector or in a virtual setup as a continuous image feed (stream) produced by the application.","PeriodicalId":376476,"journal":{"name":"SIGGRAPH Asia 2020 Art Gallery","volume":"87 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131589205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This work explores a new process of creativity generation guided by Jordanous's Four PPPPerspectives (2016) and speculates on the intertwined relationships among the many contributors to computational creativity. It can also be seen as an experimental multispecies storytelling about creativity. The experiment collected images from the OpenProcessing community as training samples and fed them into StyleGAN to generate many new images. These images were then post-processed into environment-driven, interactive moving images using an optical flow algorithm. In this computational system, it is speculated that, to some extent, humans are inspiring themselves and that all other non-egos are used as bridges and catalysts in a closed loop. I hope this work can motivate audiences to think about the definition of creativity and to reflect on humans' unique creative ability by comparing it with the creative abilities of machines and nature.
{"title":"Augmented creativity","authors":"Yanyi Lu","doi":"10.1145/3414686.3427177","DOIUrl":"https://doi.org/10.1145/3414686.3427177","url":null,"abstract":"This work explores a new process of creativity generation under the guide of Jordanous's Four PPPPerspectives (2016) and speculates intertwined relationships among multi-contributors in computational creativity. This work can also be seen as an experimental multispecies storytelling on creativity. This experiment collected the images from OpenProcessing community as training samples and fed them into styleGAN to generate many images. Then these images are postprocessed as environment-driven interactive moving images by optical flow algorithm. In this computational system, it is speculated that humans are inspiring themselves and that all other nonegos are used as bridges and catalysts in a closed loop, to some extent. I hope this work can motivate audiences to think about the definition of creativity and reflect on human's unique ability on creation by comparing with machine and nature's creative abilities.","PeriodicalId":376476,"journal":{"name":"SIGGRAPH Asia 2020 Art Gallery","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128810039","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Roads in You is an interactive biometric-data artwork that allows participants to scan their veins and find the roads that match their vein lines. Vein data, one of the most fascinating forms of biometric data, contain uniquely complicated lines that resemble the roads and paths surrounding us. The roads, in turn, resemble the way our veins are interconnected and the way blood circulates through our bodies in various directions, at various speeds, and under different conditions. This artwork explores the line segmentation and structure of veins and compares them with roads in the real world. Participants can also export their data and keep them as a personalized souvenir (3D-printed sculptures) as part of the artistic experience. Through this project, users can explore the correlation between individuals and their environments using the hidden patterns under the skin, vein recognition techniques, and image processing. The project also has the potential to lead the way in interpreting complicated datasets while providing aesthetically beautiful and mesmerizing visualizations.
{"title":"Roads in you","authors":"Yoon Chung Han, R. Cottone, Anusha","doi":"10.1145/3414686.3427143","DOIUrl":"https://doi.org/10.1145/3414686.3427143","url":null,"abstract":"Roads in You is an interactive biometric-data artwork that allows participants to scan their veins and find the roads that match their vein lines. The vein data as one of the fascinating forms of biometric data contain uniquely complicated lines that resemble the roads and paths surrounding us. The roads resemble how our vein lines are interconnected and how the blood circulates in our bodies in various directions, at various speeds, and in different conditions. This artwork explores the line segmentation and the structure of veins and compares them to roads in the real world. The participants can also export the data and keep them as a personalized souvenir (3d printed sculptures) as part of the artistic experience. Through this project, users can explore the correlation between individuals and environments using the hidden patterns under the skin and vein recognition techniques and image processing. This project also has the potential to lead the way in the interpretation of complicated datasets while providing aesthetically beautiful and mesmerizing visualizations.","PeriodicalId":376476,"journal":{"name":"SIGGRAPH Asia 2020 Art Gallery","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130603747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Chameleons change the color of their skin according to the surrounding environment to hide their bodies; this not only lets them avoid predators but also confuses their prey. I use an adversarial neural network: I fed a large number of images of various parts of chameleons into the machine learning algorithm, and the neural network generated color-changing chameleon skin based on these pictures. Could artificial intelligence one day hide itself with pixel camouflage, the way a chameleon does? The work comes in two versions, image and image + interaction: in the interactive version, the chameleon's color changes according to the color captured by the electronic capture device.
{"title":"Chameleon","authors":"W. Gao","doi":"10.1145/3414686.3427129","DOIUrl":"https://doi.org/10.1145/3414686.3427129","url":null,"abstract":"chameleon change the color of skin according to the surrounding environment for hiding the body, can not only avoid predators, and also confuse their prey. I use against neural network, I input a lot of images of chameleon various parts to the machine learning algorithm, the neural network generated the color-changing chameleon skin based on these pictures. Artificial-intelligence one day can be like a chameleon hide himself with pixel camouflage? Work is divided into two versions, image and image + interaction: in the interactive version, the color of the chameleon can change color according to the color of the catch by the electron trap.","PeriodicalId":376476,"journal":{"name":"SIGGRAPH Asia 2020 Art Gallery","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116009242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}