{"title":"Pedestrian Notification Methods in Autonomous Vehicles for Multi-Class Mobility-on-Demand Service","authors":"Evelyn Florentine, M. A. Ang, S. Pendleton, Hans Andersen, M. Ang","doi":"10.1145/2974804.2974833","DOIUrl":"https://doi.org/10.1145/2974804.2974833","url":null,"abstract":"In this paper, we describe methods of conveying perception information and motion intention from self driving vehicles to the surrounding environment. One method is by equipping autonomous vehicles with Light-Emitting Diode (LED) strips to convey perception information; typical pedestrian-driver acknowledgement is replaced by visual feedback via lights which change color to signal the presence of obstacles in the surrounding environment. Another method is by broadcasting audio cues of the vehicle's motion intention to the environment. The performance of the autonomous vehicles as social robots is improved by building trust and engagement with interacting pedestrians. The software and hardware systems are detailed, and a video demonstrates the working system in real application. Further extension of the work for multi-class mobility in human environments is discussed.","PeriodicalId":185756,"journal":{"name":"Proceedings of the Fourth International Conference on Human Agent Interaction","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132050044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modulating Dynamic Models for Lip Motion Generation","authors":"Singo Sawa, H. Kawashima, Kei Shimonishi, T. Matsuyama","doi":"10.1145/2974804.2980499","DOIUrl":"https://doi.org/10.1145/2974804.2980499","url":null,"abstract":"Generation of natural human motion is one of key techniques for multimodal dialogue systems with a human-like avatar. In particular, natural and expressive lip motion synthesis is necessary to make conversation between a user and an avatar richer. However, such expressive lip motion is often difficult to be generated automatically because it can be changed depending on phonemic context and prosody. To address this difficulty, we introduce a novel motion generation method on the basis of the modulation of a set of dynamic models learned from neutral motion data. As a suitable model for lip motion generation, we adopt a hybrid dynamical system, which consists of linear dynamical systems for each motion unit and a symbolic automaton for switching between these units. We show that, from the viewpoint of control theory, it is possible to modulate linear dynamical systems for various types of motion. Early results demonstrate the applicability of the proposed method using lip motion synthesis for simple phoneme sequences.","PeriodicalId":185756,"journal":{"name":"Proceedings of the Fourth International Conference on Human Agent Interaction","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127694826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Process of Agency Identification Based on the Desire to Communicate in Embodied Interaction","authors":"Takafumi Sakamoto, Yugo Takeuchi","doi":"10.1145/2974804.2980518","DOIUrl":"https://doi.org/10.1145/2974804.2980518","url":null,"abstract":"Humans can communicate because they adapt and adjust their behavior to each other. We hypothesize that developing a relationship with others requires coordinating the desire to communicate and that this coordination is related to agency identification. To model this initial phase of communication, we created an experimental environment to observe the interaction between a human and an abstract-shaped robot whose behavior, moving on the floor and rotating, was mapped by another human. The participants were required to verbalize what they were thinking or feeling while interacting with the robot. At present, we do not have a sufficient number of participants, and experiments and data analysis are ongoing. We must verify the effects of interaction patterns and inspect what type of action and reaction are regarded as signals that enhance interpersonal interaction.","PeriodicalId":185756,"journal":{"name":"Proceedings of the Fourth International Conference on Human Agent Interaction","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129811045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robots in Harmony with Humans","authors":"David Hsu","doi":"10.1145/2974804.2993927","DOIUrl":"https://doi.org/10.1145/2974804.2993927","url":null,"abstract":"In early days, robots often occupied tightly controlled environments, for example, factory floors, designed to segregate robots and humans for safety. Today robots \"live\" with humans, providing a variety of services at homes, in workplaces, or on the road. To become effective and trustworthy collaborators, robots must understand human intentions and act accordingly in response. One core challenge here is the inherent uncertainty in understanding intentions, as a result of the complexity and diversity of human behaviours. Robots must hedge against such uncertainties to achieve robust performance and sometimes actively elicit information in order to reduce uncertainty and ascertain human intentions. Our recent work explores planning and learning under uncertainty for human-robot interactive or collaborative tasks. It covers mathematical models for human intentions, planning algorithms that connect robot perception with decision making, and learning algorithms that enable robots to adapt to human preferences. The work, I hope, will spur greater interest towards principled approaches that integrate perception, planning, and learning for fluid human-robot collaboration.","PeriodicalId":185756,"journal":{"name":"Proceedings of the Fourth International Conference on Human Agent Interaction","volume":"145 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120872123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluation of a Substitution Device for Emotional Labor by using Task-Processing Time and Cognitive Load","authors":"Takeomi Goto, Hirotaka Osawa","doi":"10.1145/2974804.2980517","DOIUrl":"https://doi.org/10.1145/2974804.2980517","url":null,"abstract":"Nowadays, physical and intellectual labor can be substituted by robotics and information technology. However, emotional labor, which causes mental stress to the workers, has not been substituted yet. Therefore, we propose a method called emotional cyborg that substitutes human-emotional representation with the use of attachable devices. In this study, we used AgencyGlass, a device that substitutes the function of human eyes. We conducted a preliminary study to measure the task-processing time and the subject's cognitive load during the use of AgencyGlass. The result suggests that the AgencyGlass can perform joint attention similar to humans; however, its attentional shift is weaker than in humans.","PeriodicalId":185756,"journal":{"name":"Proceedings of the Fourth International Conference on Human Agent Interaction","volume":"361 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132207849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Do Synchronized Multiple Robots Exert Peer Pressure?","authors":"M. Shiomi, N. Hagita","doi":"10.1145/2974804.2974808","DOIUrl":"https://doi.org/10.1145/2974804.2974808","url":null,"abstract":"In human-human interaction, peer pressure is a major social influence on people's thoughts, feelings, and behaviors. The larger the group of people, the more social influence it exerts. In this paper, we investigate whether multiple robots and their synchronized behaviors exert peer pressure on people, as in human groups. We developed a multiple robot controller system that enables robots to perform precise synchronization. In the experiment, we prepared a setting that resembled previous experiments that investigated peer pressure between people and robots. The participants answered questions after hearing the robots' answers, only some of which were incorrect. Our experiment results showed that the influence of the synchronized multiple robots increased the error rates of the participants, but we found no significant effects toward conformity.","PeriodicalId":185756,"journal":{"name":"Proceedings of the Fourth International Conference on Human Agent Interaction","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130911833","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Main Track Session III: Modelling Interactions","authors":"Sho Sakurai, Yugo Takeuchi","doi":"10.1145/3257125","DOIUrl":"https://doi.org/10.1145/3257125","url":null,"abstract":"","PeriodicalId":185756,"journal":{"name":"Proceedings of the Fourth International Conference on Human Agent Interaction","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128471360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Simulation of a Tele-operated Task under Human-Robot Shared Control","authors":"Longjiang Zhou, K. Tee, Zhiyong Huang","doi":"10.1145/2974804.2983316","DOIUrl":"https://doi.org/10.1145/2974804.2983316","url":null,"abstract":"This poster presents simulation of a tele-operated shared controlled robot task that is integrated with a generic simulator of RADOE (Robot Application Development and Operating Environment). A customized and extendable Rviz interface plugin is designed and applied to import models, do simulation, enable real robot operation, and communicate with other projects by clicking related buttons. In the simulation process, the robot model in the simulator is controlled and visualized by human operator using an Omega 7 haptic device and automatic method, i.e., the shared control. After the simulation is conducted and satisfied, the system will send back a signal to the real robot system to execute the operation task; otherwise, simulation of the shared control process will be continued until satisfaction. We provide a simulation of a drawing task on the surface of a sphere.","PeriodicalId":185756,"journal":{"name":"Proceedings of the Fourth International Conference on Human Agent Interaction","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126805560","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Whispering Bubbles: Exploring Anthropomorphism through Shape-Changing Interfaces","authors":"S. Qiu, S. A. Anas, Jun Hu","doi":"10.1145/2974804.2980481","DOIUrl":"https://doi.org/10.1145/2974804.2980481","url":null,"abstract":"In anthropomorphic design, there has been increasing interests in using kinetic motion and shape changing of the physical objects as a medium to communicate with people. In this paper, we introduce an interactive installation named Whispering Bubbles to explore anthropomorphism through shape-changing interfaces embedded in a physical space. It aims to provide a poetic place for people to whisper with the organically shaped objects (bubbles), to help people release mental stress in their modern lives. When a person approaches bubbles within a given distance, slight up-and-down movements of the bubbles will be activated by infrared sensors embedded in the space; when a person stands nearby a bubble and whispers to it, the bubble will \"hear\" with its sound detector and be triggered to bend towards the person, indicating engagement in listening. A scale model is implemented to explore and demonstrate interactions.","PeriodicalId":185756,"journal":{"name":"Proceedings of the Fourth International Conference on Human Agent Interaction","volume":"72 7","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120808216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"User Generated Agent: Designable Book Recommendation Robot Programmed by Children","authors":"Yusuke Kudo, Wataru Kayano, Takuya Sato, Hirotaka Osawa","doi":"10.1145/2974804.2980489","DOIUrl":"https://doi.org/10.1145/2974804.2980489","url":null,"abstract":"School libraries are required to promote the habit of reading books in elementary school children. It is necessary to cultivate children's interest in books to achieve this goal. In this paper, we propose a user-generated agent (UGA) that introduces books. Elementary school children can program the behavior of the UGA themselves. The UGA not only cultivates the children's interest in the book introduced by the agent, but also their motivation for the presentation by allowing them to design the contents of the agent. We promote the habit of reading by allowing the children to modify the agent's design and giving them the opportunity to refine their ability to promote books.","PeriodicalId":185756,"journal":{"name":"Proceedings of the Fourth International Conference on Human Agent Interaction","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114910812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}