{"title":"Session details: Main Track Session VII: Communication Cues","authors":"K. Terada, Tomo Wantanabe","doi":"10.1145/3257129","DOIUrl":"https://doi.org/10.1145/3257129","url":null,"abstract":"","PeriodicalId":185756,"journal":{"name":"Proceedings of the Fourth International Conference on Human Agent Interaction","volume":"171 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122833056","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards an Interactive Voice Agent for Singapore Hokkien
Vanessa Lim, Hui Shan Ang, Estelle Lee, Boon Pang Lim
In Proceedings of the Fourth International Conference on Human Agent Interaction (HAI 2016). DOI: 10.1145/2974804.2980495
Abstract: Singapore Hokkien (SH) is the most commonly spoken non-Mandarin Chinese dialect in Singapore. It is an important language for many members of Singapore's pioneer generation, but much less so for the younger generation, who prefer English. In recent years, the greying of this demographic has created increasing demand for assistive devices to support them. We report ongoing efforts to build limited-vocabulary speech recognition, with the eventual goal of a conversational voice agent in SH that can support home-automation and in-hospital use cases. This is challenging because sizeable SH speech corpora do not yet exist, and SH differs enough from Mandarin and other Minnan varieties that existing corpora cannot be used directly. We document our efforts at building language resources -- audio corpora and pronunciation lexicons -- and present preliminary findings on multilingual training.
Human-Robot Cooperative Conveyance Using Speech and Head Gaze
Tetsushi Oka, Shoichi Uchino
In Proceedings of the Fourth International Conference on Human Agent Interaction (HAI 2016). DOI: 10.1145/2974804.2980486
Abstract: In this study, we designed a strategy for cooperative conveyance by a human and a robot that combines speech and head gaze with a set of twelve voice commands. In this strategy, the human turns his or her head to face the robot and speaks one of the twelve commands; to start and stop the robot's movement, the human gives nonverbal cues by shifting his or her point of gaze. We developed a mobile robot that interacts with a human based on this strategy and command set, and evaluated it with ten young novices. The results imply that most young people can quickly learn to cooperate with our robot to move objects using speech and head gaze.
Alignment Approach Comparison between Implicit and Explicit Suggestions in Object Reference Conversations
Mitsuhiko Kimoto, T. Iio, M. Shiomi, I. Tanev, K. Shimohara, N. Hagita
In Proceedings of the Fourth International Conference on Human Agent Interaction (HAI 2016). DOI: 10.1145/2974804.2974814
Abstract: Recognizing an object indicated by an interacting person is an essential capability for a robot acting in daily environments. To improve recognition accuracy, the ambiguity in how people indicate objects must be reduced. For this purpose, we experimentally compared two interaction strategies: a robot that explicitly instructs people on how to refer to objects, and a robot that implicitly aligns with people's indicating behaviors. Although participants rated the implicit approach as more natural than the explicit one, the recognition performance of the two approaches did not differ significantly.
{"title":"Session details: Main Track Session VI: Human Characteristics","authors":"Hideyuki Nakanishi, Andreea Niculescu","doi":"10.1145/3257128","DOIUrl":"https://doi.org/10.1145/3257128","url":null,"abstract":"","PeriodicalId":185756,"journal":{"name":"Proceedings of the Fourth International Conference on Human Agent Interaction","volume":"8 10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129308933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development of an Embodied Avatar System using Avatar-Shadow's Color Expressions with an Interaction-activated Communication Model
Yutaka Ishii, Tomio Watanabe, Yoshihiro Sejima
In Proceedings of the Fourth International Conference on Human Agent Interaction (HAI 2016). DOI: 10.1145/2974804.2980487
Abstract: In the real world, shadows are natural and usually unintentional. In virtual reality, however, they play an important role in three-dimensional effects and the perceived realism of the virtual space, and an avatar's shadow can interact with the avatar itself. In this study, we develop an embodied avatar system that uses avatar-shadow color expressions driven by an interaction-activated communication model. The model is based on the heat conduction equation from heat-transfer engineering and was developed to enhance empathy during embodied interaction in avatar-mediated communication. A communication experiment with 12 pairs of participants confirmed the system's effectiveness: the sensory evaluation showed that interaction activation can be visualized by changing the avatar-shadow's color.
Improving Smartphone Users' Affect and Wellbeing with Personalized Positive Psychology Interventions
Sooyeon Jeong, C. Breazeal
In Proceedings of the Fourth International Conference on Human Agent Interaction (HAI 2016). DOI: 10.1145/2974804.2974831
Abstract: We developed a smartphone application that detects users' affect and delivers personalized positive psychology interventions to enhance their psychological wellbeing. Users' emotional states were measured by analyzing facial expressions and the sentiment of SMS messages. A virtual character in the application prompted users to verbally journal about their day through three positive psychology interventions. The system modeled the interaction as a Markov Decision Process (MDP) and used the State-Action-Reward-State-Action (SARSA) algorithm to learn each user's preferences among the interventions. Nine participants used the application daily for three weeks in an experimental study. The interactive journaling activity increased participants' arousal and valence levels immediately after each interaction, and we observed a trend toward improved self-acceptance over the three-week period. Interaction duration also increased significantly throughout the study. Qualitative analysis of the journal entries showed that users explored and reflected on various aspects of themselves through daily events, and found novel appreciation for, and meaning in, their daily routines.
Mutual Adaptation between a Human and a Robot Based on Timing Control of "Sleep-time"
M. Kitagawa, B. L. Evans, N. Munekata, T. Ono
In Proceedings of the Fourth International Conference on Human Agent Interaction (HAI 2016). DOI: 10.1145/2974804.2980502
Abstract: In our research, we built a system in which a robot acquires motions from a human and repeats them when the human communicates with it. By controlling the timing of the robot's imitation, we examined the effect of mutual adaptation between human and robot. In an experiment in which we controlled the imitation timing, some participants were influenced by the robot and changed their own motions. We found that, in some cases, the robot could affect a human's motion by imitating it.
Investigation on Effects of Color, Sound, and Vibration on Human's Emotional Perception
Sichao Song, S. Yamada
In Proceedings of the Fourth International Conference on Human Agent Interaction (HAI 2016). DOI: 10.1145/2974804.2980497
Abstract: As robotics has advanced, conveying a robot's emotional state to a person has become a hot research topic. Most current studies focus on interaction modalities such as facial expressions and natural language. Although many of the results are promising, these modalities suffer from high cost and technical difficulty. In this paper, we turn to three other interaction modalities: color, sound, and vibration, which have the advantage of being simple, low-cost, and intuitive. We conducted a pilot study to evaluate the effects of the three modalities on humans' emotional perception of our robot Maru. Our results indicate that people tend to interpret the robot's emotion as negative (angry in particular) when vibration and sound are used, and as relaxed when only color is used. In addition, participants preferred the robot when all three modalities were used.
Ear Ball for Empathy: To Realize the Sensory Experience of People with Autism Spectrum Disorder
T. Murakami
In Proceedings of the Fourth International Conference on Human Agent Interaction (HAI 2016). DOI: 10.1145/2974804.2980516
Abstract: Autism spectrum disorder (ASD) is characterized by difficulties in sensory integration and a body image that differs from the typical one. To enable people without ASD to experience, and thus develop a deeper understanding of, the different body image of people with developmental disorders, the author is pursuing research into simulating sensory experiences common in ASD. This study focuses on one sensory characteristic of ASD, difficulty in locating the source of a sound in the environment, and develops a device that simulates hearing as experienced with this difficulty. Workshops for children were carried out using the developed device, and interviews indicated that the majority of participants experienced a feeling of ambiguity about their own senses. Such ambiguity in one's sensory boundaries is a phenomenon commonly reported in ASD research. It was concluded that the device developed in this study allows people to vicariously experience the hearing of people with ASD.