This interactive artwork uses augmented reality on mobile devices to convey the idea that bacteria are everywhere around us. The installation presents an aesthetic, interactive rendering of common bacteria types found on everyday items. An interactive Android application built with Augmented Reality (AR) technology displays a 3D model of the bacteria detected each time the mobile device scans a recognized everyday item. Users can manipulate the bacteria's behavior by simulating different environmental conditions and visualizing the resulting behavior in aesthetic 3D form. The project is suitable for digital advertising as well as educational exhibition booths aimed at attracting technology-savvy young audiences. The overall installation combines 3D visuals with interactive manipulation of control parameters to simulate bacterial behavior and visualize it aesthetically through 3D visual art.
{"title":"The Invisible: Bacteria Everywhere","authors":"C. Wei, Wong Chee Onn","doi":"10.1145/3001773.3001819","DOIUrl":"https://doi.org/10.1145/3001773.3001819","url":null,"abstract":"This interactive artwork uses augmented reality on mobile devices to portray the concept of bacteria is everywhere around us. The installation set up will show an aesthetic interactive design of common bacteria types found in every day's items. Interactive android application is built using Augmented Reality (AR) technology with a 3D display of the bacteria found each time the mobile devices scans an identified item that is commonly found around us. Users can manipulate the bacteria's behavior by simulating a different condition for the bacteria and visualize in 3D format of the aesthetic behavior of the bacteria. This project is suitable for digital advertising as well as an educational exhibition booth to attract the interest of the youngsters who are technology savvy. The overall installation utilizes the 3D concept and interactive manipulation of the controlling parameters to simulate bacteria's behavior and visualize in an aesthetic manner with 3D visual arts.","PeriodicalId":127730,"journal":{"name":"Proceedings of the 13th International Conference on Advances in Computer Entertainment Technology","volume":"112 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127140512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Routine everyday work, such as housework, tends to be boring and monotonous. Work songs have long been written and sung by workers to lighten their labor. In addition, text-to-song synthesizer software such as Yamaha's VOCALOID™ is widely used by computer music creators. In this paper, we propose a real-time music synthesizer named "conteXinger". The system sings lyrics based on the listener's context, including the use of home appliances (e.g., vacuum cleaner, refrigerator, microwave oven, or dishwasher) and Internet information (e.g., SNS messages, Web news, and weather reports). By presenting the synthesized music to the user through a home audio system or headphones, our system entertains users who may be bored by their everyday work routines.
{"title":"conteXinger: A Context-aware Song Generator to Enrich Daily Lives","authors":"Ayano Nishimura, I. Siio","doi":"10.1145/3001773.3014350","DOIUrl":"https://doi.org/10.1145/3001773.3014350","url":null,"abstract":"In general, routine every-day work tends to be boring and monotonous, e.g. housework. Work songs have been written and sung by workers to reduce their labor load. In addition, text-to-song synthesizer software such as Yamaha's VOCALOID™ is commonly used by a wide variety of computer music creators. In this paper, we propose a real-time music synthesizer named \"conteXinger\". This system sings lyrics based on the listener's context, including the use of home appliances, e.g. vacuum cleaner, refrigerator, microwave oven, or dish washer, and Internet information, e.g. SNS messages, Web news, and weather reports. By presenting the synthesized music to a user through a home audio system or headphones, our system entertains users who may be bored due to their everyday work routines.","PeriodicalId":127730,"journal":{"name":"Proceedings of the 13th International Conference on Advances in Computer Entertainment Technology","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124945990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MoCHA (Monitoring Cognitive Health using Apps) is a set of tablet-based games designed to provide convenient, low-stress, affordable monitoring of cognitive health for elders at risk of developing Alzheimer's disease. Conducting psychological measurement via gameplay poses unique game-design challenges, and there are additional factors to consider when designing games for non-gamer elders who may be, or become, cognitively impaired. In this paper we briefly describe the MoCHA system, identify key design challenges, and show how specific features of the game contribute to meeting these challenges.
{"title":"MoCHA: Designing Games to Monitor Cognitive Health in Elders at Risk for Alzheimer's Disease","authors":"Ilya Farber, Karl C. Fua, Swati Gupta, D. Pautler","doi":"10.1145/3001773.3001818","DOIUrl":"https://doi.org/10.1145/3001773.3001818","url":null,"abstract":"MoCHA (Monitoring Cognitive Health using Apps) is a set of tablet-based games designed to provide convenient, low-stress, affordable monitoring of cognitive health for elders at risk of developing Alzheimer's disease. Conducting psychological measurement via gameplay poses unique game-design challenges, and there are additional factors to consider when designing games for non-gamer elders who may be, or become, cognitively impaired. In this paper we briefly describe the MoCHA system, identify key design challenges, and show how specific features of the game contribute to meeting these challenges.","PeriodicalId":127730,"journal":{"name":"Proceedings of the 13th International Conference on Advances in Computer Entertainment Technology","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125931570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the development of Web 2.0 technology, large-scale knowledge sharing deserves increasing attention. Owing to the long-tail distribution generally observed in conventional knowledge-sharing systems, knowledge providers are sometimes unwilling to share knowledge when they cannot obtain any feedback. Conversely, when knowledge acquirers read long-tail knowledge, they may easily become bored because such knowledge rarely matches their own interests. To address these problems, we proposed and implemented a gamified knowledge-sharing system called "Light Quest," which shares long-tail knowledge in an entertaining way. In this system, users briefly describe knowledge they would like to share on a "Tips Card" and evaluate the cards written by other users by choosing the best card from a set randomly selected from the knowledge pool. The results of a four-week experimental evaluation indicated that the system can increase users' motivation to both provide and acquire knowledge.
{"title":"Light Quest: A Gamified Knowledge-sharing System to Increase Motivation to Provide Long-tail Knowledge","authors":"Hao Yin, Keiko Yamamoto, Itaru Kuramoto, Y. Tsujino","doi":"10.1145/3001773.3001795","DOIUrl":"https://doi.org/10.1145/3001773.3001795","url":null,"abstract":"Together with the development of Web 2.0 technology, we need to focus more on large-scale knowledge sharing. Owing to the characteristics of the long-tail distribution generally observed in conventional knowledge-sharing systems, knowledge providers are sometimes unwilling to share knowledge when they are unable to obtain any feedback. However, when knowledge acquirers read long-tail knowledge, they may easily feel bored because such knowledge rarely matches their own interests. To solve these problems, we proposed and implemented a gamified knowledge-sharing system called \"Light Quest,\" which can share long-tail knowledge in an entertaining way. In this system, the users can briefly describe knowledge they would like to share using a \"Tips Card,\" and evaluate the cards written by other users, by choosing the best card from a set of cards that are randomly selected from a knowledge pool. The results of a four-week experimental evaluation indicated that the system could increase users' motivation to both provide and acquire knowledge.","PeriodicalId":127730,"journal":{"name":"Proceedings of the 13th International Conference on Advances in Computer Entertainment Technology","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129159292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose Human Coded Orchestra, a new approach to enable a group of individuals to sing in harmony by using computed directional speakers. The possibilities of musical performance by an untrained group have been explored in the fields of science and art. However, previous work has rarely proceeded beyond simple rhythm-based music and failed to achieve musical complexity. Human Coded Orchestra employs a number of directional speakers, each set at a different pitch, enabling them to deliver different pitches to each participant to sing to, according to their positions. Experiments demonstrated that participants succeeded in singing in harmony extemporaneously, and they reported that they enjoyed both the experience of singing and the feeling that they were able to participate in an activity with others. Our system does not require preparation on the part of singers, which opens up the possibility of practical application in the area of interactive performance.
{"title":"Human Coded Orchestra: a System for Extemporary Group Singing Performance","authors":"Yuzu Saijo, Kenta Suzuki, Nobutaka Ito, Amy Koike, Yoichi Ochiai","doi":"10.1145/3001773.3001811","DOIUrl":"https://doi.org/10.1145/3001773.3001811","url":null,"abstract":"We propose Human Coded Orchestra, a new approach to enable a group of individuals to sing in harmony by using computed directional speakers. The possibilities of musical performance by an untrained group have been explored in the fields of science and art. However, previous work has rarely proceeded beyond simple rhythm-based music and failed to achieve musical complexity. Human Coded Orchestra employs a number of directional speakers, each set at a different pitch, enabling them to deliver different pitches to each participant to sing to, according to their positions. Experiments demonstrated that participants succeeded in singing in harmony extemporaneously, and they reported that they enjoyed both the experience of singing and the feeling that they were able to participate in an activity with others. Our system does not require preparation on the part of singers, which opens up the possibility of practical application in the area of interactive performance.","PeriodicalId":127730,"journal":{"name":"Proceedings of the 13th International Conference on Advances in Computer Entertainment Technology","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124627803","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Virtual reality (VR) enables unusual experiences, including the physically impossible, in an immersive environment. As new media such as VR are developed, designers tend to remediate aspects from previous media, but not every aspect fits. Several areas in VR design warrant scientific investigation in that regard. This paper specifically addresses transitioning between environments: when transitioning in a virtual world, will camera movements made simultaneously and in sync with movements from the user produce a preferred scene transition experience compared to virtual camera movement that is less directly coupled? We report on a within-subject experiment where participants were tasked with transitioning between different environments. One set of transitions required the full physical motion of the user to complete, for the other set completing the transition was triggered after a part of the physical movement was performed up to a threshold. Results showed a clear preference for the second variant and thus for less control over the virtual camera.
{"title":"You're the Camera!: Physical Movements For Transitioning Between Environments in VR","authors":"Josh Kohn, Stefan Rank","doi":"10.1145/3001773.3001824","DOIUrl":"https://doi.org/10.1145/3001773.3001824","url":null,"abstract":"Virtual reality (VR) enables unusual experiences, including the physically impossible, in an immersive environment. As new media such as VR are developed, designers tend to remediate aspects from previous media, but not every aspect fits. Several areas in VR design warrant scientific investigation in that regard. This paper specifically addresses transitioning between environments: when transitioning in a virtual world, will camera movements made simultaneously and in sync with movements from the user produce a preferred scene transition experience compared to virtual camera movement that is less directly coupled? We report on a within-subject experiment where participants were tasked with transitioning between different environments. One set of transitions required the full physical motion of the user to complete, for the other set completing the transition was triggered after a part of the physical movement was performed up to a threshold. Results showed a clear preference for the second variant and thus for less control over the virtual camera.","PeriodicalId":127730,"journal":{"name":"Proceedings of the 13th International Conference on Advances in Computer Entertainment Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129898503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the widespread use of social networks, sharing emotions has become easy and accessible to anyone with a smartphone or a computer. In this paper, we present a system capable of automatically generating highlight videos of sports match TV broadcasts based on the emotions shared by spectators during the match, the broadcast audio, motion analysis, and manual annotations (when available). Our system also allows the user to query the video to extract specific clips, such as dangerous attacking plays by a certain team. Our preliminary results are encouraging, showing that these summaries can be created successfully in a very short time.
{"title":"Automatic Generation of Sport Video Highlights Based on Fan's Emotions and Content","authors":"Guilherme Fião, T. Romão, N. Correia, Pedro Centieiro, A. E. Dias","doi":"10.1145/3001773.3001802","DOIUrl":"https://doi.org/10.1145/3001773.3001802","url":null,"abstract":"With the widespread use of social networks, sharing emotions has become easy and accessible to anyone with a smartphone or a computer. In this paper, we present a system capable of generating automatic highlight videos of sports match TV broadcasts based on the emotions shared by the spectators during the match, the audio, the analysis of the movement and manual annotations (when available). Our system also allows for the user to query the video to extract specific clips, such as dangerous attacking plays of a certain team. Our preliminary results were encouraging, showing that the creation of these summaries can be successfully done in very short time.","PeriodicalId":127730,"journal":{"name":"Proceedings of the 13th International Conference on Advances in Computer Entertainment Technology","volume":"770 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132970351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Taifūrin is a novel typhoon early warning system that informs people when a typhoon is approaching. We combined a traditional Japanese wind-chime (known as fūrin) with near real-time remotely-sensed typhoon data and electronic components connected to a single-board computer to create a unique IoT (Internet of Things) device in the form of a simple art installation. In doing so, we aimed to combine modern interactivity with a traditional sense of Japanese aesthetics, known as wabi-sabi.
{"title":"Taifūrin: Wind-Chime Installation As A Novel Typhoon Early Warning System","authors":"Paul Haimes, Tetsuaki Baba, Kumiko Kushiyama","doi":"10.1145/3001773.3001830","DOIUrl":"https://doi.org/10.1145/3001773.3001830","url":null,"abstract":"Taifūrin is a novel typhoon early warning system that informs people when a typhoon is approaching. We combined a traditional Japanese wind-chime (known as fūrin) with near real-time remotely-sensed typhoon data and electronic components connected to a single-circuit board computer to create a unique IoT (Internet of Things) device in the form of a simple art installation. In doing so, we aimed to combine modern interactivity with a traditional sense of Japanese aesthetics, known as wabi-sabi.","PeriodicalId":127730,"journal":{"name":"Proceedings of the 13th International Conference on Advances in Computer Entertainment Technology","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131875798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
One of the known challenges in Child-Robot Interaction (cHRI) is sustaining children's engagement in long-term interactions with robots. Researchers have hypothesised that robots that can adapt to children's affective states, and that can also learn from the environment, will sustain engagement during cHRI. In this paper, we report on a study conducted with three groups of children who played a snakes-and-ladders game with the NAO robot. The NAO performed 1) game-based adaptations, 2) emotion-based adaptations, and 3) memory-based adaptations. The purpose of this study was to find which condition best maintained engagement over a certain period of time. Our results show that the adaptations performed by the robot were, in general, able to maintain long-term engagement. However, we did not find any significant effect of one adaptation over another on engagement, social presence, or perceived support.
{"title":"Effect of Different Adaptations by a Robot on Children's Long-term Engagement: An Exploratory Study","authors":"M. Ahmad, Omar Mubin, Joanne Orlando","doi":"10.1145/3001773.3001803","DOIUrl":"https://doi.org/10.1145/3001773.3001803","url":null,"abstract":"One of the known challenges in Children Robot Interaction (cHRI) is to sustain children's engagement for long-term interactions with robots. Researchers have hypothesised that robots that can adapt to children's affective states, and can also learn from the environment, resulting in sustained engagement during cHRI. In this paper, we report on a study conducted with three groups of children who played a snakes and ladders game with the NAO robot. The NAO performed 1) Game based adaptations, 2) Emotion based adaptations and 3) Memory based adaptation. The purpose of this study was to find which particular condition resulted in maintaining engagement over a certain period of time. Our results show that adaptations performed by the robot, in general, were able to maintain long-term engagement. However, we did not find any significant effect of one adaptation over another on engagement, social presence and perceived support.","PeriodicalId":127730,"journal":{"name":"Proceedings of the 13th International Conference on Advances in Computer Entertainment Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115759814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In face-to-face situations, tabletop displays have been used to share information among multiple users. Mid-air images are a promising way to use the space above a tabletop. However, in our previous research, the mid-air images on a tabletop were fixed in the depth direction. To design a digital PONG-like game, the mid-air image needs to move freely and horizontally across the table. Building on this previous research, we designed a mid-air imaging tabletop display system that shows a moving vertical mid-air image. However, when an image is moved, the system's optical design produces undesirable through light, owing to the behavior of the imaging device (aerial-imaging plate). We propose an improved design of our mid-air imaging tabletop display system for playing face-to-face digital games. Two requirements had to be met to improve upon the previous system: moving a displayed dual-sided vertical mid-air image and blocking through light. To display a vertical mid-air image to two people, we use an optical imaging device and a dual-sided display, as in the previous optical design. To move this image, an XY-table is placed
{"title":"HoVerTable PONG: Playing Face-to-face Game on Horizontal Tabletop with Moving Vertical Mid-air Image","authors":"Hajime Katsumoto, H. Kajita, Naoya Koizumi, T. Naemura","doi":"10.1145/3001773.3001820","DOIUrl":"https://doi.org/10.1145/3001773.3001820","url":null,"abstract":"In face-to-face situations, tabletop displays have been used to share information among multiple users. The use of mid-air images is promising for using the space above a tabletop. However, in our previous research, the mid-air images on a tabletop were fixed in the depth direction. To design a digital PONG-like game, the mid-air image needs to move freely horizontally on the table. On the basis of this previous research, we designed a mid-air imaging tabletop display system to show a moving vertical mid-air image. However, the optical design of the system causes through light when moving an image, which is undesirable. This is due to the behavior of the imaging device/aerial-imaging plate. We propose an improved design of our mid-air imaging tabletop display system for playing face-to-face digital games. There were two requirements we had to meet to improve upon the previous system: moving a displayed dual-sided vertical mid-air image and blocking through light. To display a vertical mid-air image to two people, we use an optical imaging device and a dual-sided display, like the previous optical design. To move this image, an XY-table is placed","PeriodicalId":127730,"journal":{"name":"Proceedings of the 13th International Conference on Advances in Computer Entertainment Technology","volume":"&NA; 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126046002","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}