holarchy
Seiya Aoki, Yusuke Yamada, Santa Naruse, Reo Anzai, Aina Ono
SIGGRAPH Asia 2020 Art Gallery, December 4, 2020. DOI: 10.1145/3414686.3427159 (https://doi.org/10.1145/3414686.3427159)
This work is an online installation that creates a new audio-visual work through automatic video selection with deep learning. In audio-visual art and DJ+VJ performance, where sound and image coexist, video expression has been based on sampling methods that combine existing clips, generative methods computed in real time, and the use of the sound of a phenomenon or situation itself. These visual effects have extended music and given it new meanings. In all of these methods, however, the selection of video, and the program itself, has been premised on the artist's arbitrary decisions about what matches the music. This installation eliminates that arbitrariness: it creates a new audio-visual work by comparing, in a shared feature space, the features of the music with the features of a set of images the artist has selected beforehand, and choosing among them automatically. The soundtrack of a YouTube video chosen by the viewer is split into segments of a few seconds, and for each segment the closest clip is selected by comparing its features, in the same space, with the features of a large library of short movie and video clips prepared in advance. This selection method uses deep learning to reconstruct the mapping between video and sound that artists have constructed so far, and suggests other possible correspondences. In addition, unconnected scenes from different films, images that have never been joined before, merge into a single image and emerge as a whole, and the viewer finds a story in the relationships between them. With this work, audio-visual and DJ+VJ expression is freed from arbitrary decision-making, and artists are given a new perspective.
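The abstract does not specify the feature extractor or the distance metric used in the shared space, so the following is a minimal sketch of the per-segment retrieval loop it describes, under stated assumptions: time-averaged MFCCs stand in for the deep audio features, cosine similarity for the comparison, and the segment length and the names clip_features and clip_paths for the pre-computed clip library are all hypothetical.

```python
# Minimal sketch of the retrieval step: split the viewer-chosen
# soundtrack into fixed-length segments, embed each segment, and pick
# the nearest clip from a pre-computed library. MFCCs and cosine
# similarity are assumptions standing in for the work's unspecified
# deep features; clip_features / clip_paths are hypothetical names.
import numpy as np
import librosa

SEGMENT_SECONDS = 3.0  # "every few seconds" -- the exact value is an assumption


def audio_feature(samples: np.ndarray, sr: int) -> np.ndarray:
    """Summarize one audio segment as a single feature vector."""
    mfcc = librosa.feature.mfcc(y=samples, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)  # time-averaged MFCCs


def nearest_clip(segment_vec: np.ndarray,
                 clip_features: np.ndarray,
                 clip_paths: list[str]) -> str:
    """Return the library clip whose feature vector is closest
    (by cosine similarity) to the incoming audio segment."""
    lib = clip_features / np.linalg.norm(clip_features, axis=1, keepdims=True)
    query = segment_vec / np.linalg.norm(segment_vec)
    return clip_paths[int(np.argmax(lib @ query))]


def select_videos(audio_path: str,
                  clip_features: np.ndarray,
                  clip_paths: list[str]) -> list[str]:
    """Split the soundtrack into segments and pick one clip per segment."""
    y, sr = librosa.load(audio_path, sr=None, mono=True)
    hop = int(SEGMENT_SECONDS * sr)
    return [
        nearest_clip(audio_feature(y[i:i + hop], sr), clip_features, clip_paths)
        for i in range(0, len(y) - hop + 1, hop)
    ]
```

In this framing the clip library is embedded once offline, so each viewer request reduces to one nearest-neighbor lookup per audio segment, which is what makes the installation's automatic, non-arbitrary selection feasible online.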