"Hi, It's Me Again!": Virtual Coaches over Mobile Video
Sin-Hwa Kang, D. Krum, Thai-Binh Phan, M. Bolas
DOI: 10.1145/2814940.2814970
We believe that virtual humans presented over video chat services, such as Skype via smartphones, can be an effective way to deliver innovative applications where social interactions are important, such as counseling and coaching. We hypothesize that the context of a smartphone communication channel, i.e., how a virtual human is presented within a smartphone app, and indeed the nature of that app, can profoundly affect how a real human perceives the virtual human. We have built an apparatus that allows virtual humans to initiate, receive, and interact over video calls using Skype or any similar service. With this platform, we are examining effective designs and social implications of virtual humans that interact over mobile video. The current study examines a relationship involving repeated counseling-style interactions with a virtual human, leveraging the virtual human's ability to call and interact with a real human on multiple occasions over a period of time. The results and implications of this preliminary study suggest that repeated interactions may improve the perceived social characteristics of the virtual human.
Shared Presence and Collaboration Using a Co-Located Humanoid Robot
Johann Wentzel, Daniel J. Rea, J. Young, E. Sharlin
DOI: 10.1145/2814940.2814995
This work proposes the concept of shared presence, where we enable a user to "become" a co-located humanoid robot while still being able to use their real body to complete tasks. The user controls the robot and sees with its vision and sensors, while still maintaining awareness and use of their real body for tasks other than controlling the robot. This shared presence can be used to accomplish tasks that are difficult for one person alone, for example, a robot manipulating a circuit board for easier soldering by the user, lifting and manipulating heavy or unwieldy objects together, or generally having the robot conduct and complete secondary tasks while the user focuses on the primary tasks. If people are able to overcome the cognitive difficulty of maintaining presence for both themselves and a nearby remote entity, tasks that typically require the use of two people could simply require one person assisted by a humanoid robot that they control. In this work, we explore some of the challenges of creating such a system, propose research questions for shared presence, and present our initial implementation that can enable shared presence. We believe shared presence opens up a new research direction that can be applied to many fields, including manufacturing, home-assistant robotics, and education.
Food Image Recognition by Using Bag-of-SURF Features and HOG Features
Almarzooqi Ahmed, T. Ozeki
DOI: 10.1145/2814940.2814968
Food culture, religion, allergies, and food intolerances all create a need for systems that help people recognize their food. In this paper, we propose methods that recognize food items and display their ingredients using a bag-of-features (BoF) representation built from SURF descriptors. We also propose combining a bag of SURF features with a bag of HOG features to recognize food items. In our experiments, we achieved up to 72% accuracy on a small food image dataset of 10 categories. Our experiments show that the proposed representation is significantly accurate at identifying food compared with existing methods. Moreover, enlarging the image dataset should further improve accuracy, especially for classes with high visual diversity.
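To make the pipeline above concrete, here is a minimal sketch of a bag-of-features food classifier in the spirit of the abstract. It is not the authors' implementation: the vocabulary size, the use of scikit-learn's KMeans and a linear SVM, the fixed-size global HOG descriptor standing in for a full bag of HOG features, and all function names are assumptions; it also assumes an OpenCV contrib build that includes the non-free SURF module.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

N_WORDS = 200  # visual vocabulary size (assumed, not taken from the paper)

# SURF lives in the contrib "xfeatures2d" module and may require a build
# with the non-free algorithms enabled.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
# Small fixed-size HOG window; a simplified stand-in for bag-of-HOG features.
hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)

def surf_descriptors(gray):
    """SURF descriptors for one grayscale uint8 image (None if no keypoints)."""
    _, desc = surf.detectAndCompute(gray, None)
    return desc

def build_vocabulary(gray_images):
    """Cluster SURF descriptors from the training images into visual words."""
    all_desc = [d for g in gray_images
                if (d := surf_descriptors(g)) is not None]
    return KMeans(n_clusters=N_WORDS, n_init=10, random_state=0).fit(np.vstack(all_desc))

def encode(gray, vocab):
    """Image -> normalized SURF word histogram concatenated with a HOG vector."""
    desc = surf_descriptors(gray)
    if desc is None:
        bow = np.zeros(N_WORDS)
    else:
        words = vocab.predict(desc)
        bow = np.bincount(words, minlength=N_WORDS).astype(float)
        bow /= bow.sum() + 1e-9
    hog_vec = hog.compute(cv2.resize(gray, (64, 64))).ravel()
    return np.hstack([bow, hog_vec])

def train_classifier(gray_images, labels):
    """Fit the vocabulary and a linear SVM over the food categories."""
    vocab = build_vocabulary(gray_images)
    features = np.array([encode(g, vocab) for g in gray_images])
    return vocab, SVC(kernel="linear").fit(features, labels)
```

Classifying a new photo would then amount to something like clf.predict([encode(gray, vocab)]) with the fitted vocabulary and classifier.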
Effects of Behavioral Complexity on Intention Attribution to Robots
Yuto Imamura, K. Terada, Hideyuki Takahashi
DOI: 10.1145/2814940.2814949
Researchers in artificial intelligence and robotics have long debated whether robots are capable of possessing minds. We hypothesize that the mind is an abstract internal representation of an agent's input-output relationships, acquired through evolution to interact with others in a non-zero-sum game environment. Attributing mental states to others, based on their complex behaviors, enables an agent to understand another agent's current behavior and predict its future behavior. Therefore, behavioral complexity, i.e., complex sensory input and motor output, might be an essential cue in attributing abstract mental states to others. To test this theory, we conducted experiments in which participants were asked to control a robot that exhibits either simple or complex input-output relationships in its behavior to achieve goals by pushing a button switch on a remote control device. We then measured participants' subjective impressions of the robot after a sudden change in the mapping between the button switch and motor output during the goal-oriented task. The results indicate that the complex relationship between inputs and a robot's behavioral output requires greater abstraction and induces humans to attribute mental states to the robot in contrast to a simple relationship scenario.
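The abstract does not specify the control scheme, but the contrast between the two conditions can be illustrated with a hedged sketch: under a simple mapping the button always produces the same motor output, while under a complex mapping the output also depends on the robot's internal state, so the same press yields richer, harder-to-predict behavior. The condition names, outputs, and state variable below are hypothetical, not the study's software.

```python
# Hypothetical illustration of "simple" vs. "complex" input-output mappings;
# not the experimental system used in the study.
def simple_mapping(button_pressed: bool, state: dict) -> str:
    """One-to-one mapping: the same input always yields the same output."""
    return "move_forward" if button_pressed else "stop"

def complex_mapping(button_pressed: bool, state: dict) -> str:
    """State-dependent mapping: identical presses produce different motor
    outputs depending on the robot's history, so its behavior appears richer."""
    if not button_pressed:
        return "stop"
    outputs = ["move_forward", "turn_left", "turn_right", "back_up"]
    action = outputs[state["press_count"] % len(outputs)]
    state["press_count"] += 1  # advance the internal state after each press
    return action

# Example: state = {"press_count": 0}; complex_mapping(True, state)
```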
The Effect of An Animated Virtual Character on Mobile Chat Interactions
Sin-Hwa Kang, Andrew W. Feng, A. Leuski, D. Casas, Ari Shapiro
DOI: 10.1145/2814940.2814957
This study explores presentation techniques for a 3D animated, chat-based virtual human that communicates engagingly with users. Interactions with the virtual human occur via a smartphone outside the lab, in natural settings. Our work compares the responses of users who interact with no image or with a static image of a virtual character against those of users who interact with the animated visage of a virtual human capable of displaying appropriate nonverbal behavior. We further investigate users' responses to the animated character's gaze aversion, in which the character looks away from the user as a listening behavior. The findings of our study demonstrate that people tend to engage more in conversation, talking for a longer amount of time, when they interact with a 3D animated virtual human that averts its gaze, compared with an animated virtual human that does not avert its gaze, a static image of a virtual character, or an audio-only interface.
Transitional Explainer: Instruct Functions in the Real World and Onscreen in Multi-Function Printer
Hirotaka Osawa, Wataru Kayano, T. Miura, Wataru Endo
DOI: 10.1145/2814940.2814945
An office appliance such as a multi-function printer (MFP), which combines a printer, copy machine, scanner, and facsimile, requires users to simultaneously learn both the manipulation of real-world objects and their abstract representation in a virtual world. Although MFPs are installed in most offices and stores, their services are often not fully utilized because users find their features too difficult to understand. We therefore propose a 'transitional explainer,' an agent that instructs users in the features of an MFP by mixing real- and virtual-world representations. Blended reality, which has been proposed as a form of augmented reality, blends virtual and real expressions to leverage their combined advantages. In this study, we utilize the advantages of blended reality to show users how to operate complex appliances through anthropomorphized explanations. A self-explanatory style, delivered by the appliance itself, is used to help users remember features and to enhance the motivation of all users, especially older ones, to learn its functions. With this system, users interact with the transitional agent and thereby learn how to use the MFP. The agent hides its physical eyes and arms in onscreen mode and extends them in real-world mode. We implemented the transitional explainer to realize a blended reality agent in an MFP, and we evaluated how this transitional expression supports users' understanding of how to manipulate the MFP and enhances their motivation to use it.
DECoReS: Degree Expressional Command Reproducing System for Autonomous Wheelchairs
Komei Hasegawa, Seigo Furuya, Yusuke Kanai, M. Imai
DOI: 10.1145/2814940.2814942
In this paper, we propose DECoReS (Degree Expressional Command Reproducing System), which allows a powered wheelchair to travel autonomously through commands that include "degree expressions," adapted to particular users and environments. When users control a wheelchair through voice commands, they sometimes give orders such as "go straight speedily" or "curve to the right widely" to qualify the traveling commands. As these examples illustrate, optional words called "degree expressions" are appended to the commands. Because degree expressions are ambiguous, the traveling styles they describe vary across users and environments. DECoReS realizes travel suited to each user by learning degree expressional commands and the corresponding traveling data from that user. DECoReS also reproduces travel suited to the environment the user is about to drive through by extracting data recorded on a map similar to the current environment. Our experiments show that DECoReS can reproduce different travels depending on degree expressional commands, users, and environments.
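As a rough illustration of per-user degree-expression learning (not the DECoReS implementation; the class, the two driving parameters, and the fallback strategy are all assumptions), commands qualified by degree expressions can be treated as keys into a store of demonstrated travels whose parameters are averaged at reproduction time:

```python
# Hypothetical sketch; DECoReS' actual learning and map matching are not
# described at this level of detail, so all names and parameters are assumed.
from collections import defaultdict
from dataclasses import dataclass
from statistics import mean

@dataclass
class Travel:
    speed: float      # average speed of the demonstrated travel (m/s)
    curvature: float  # average curvature of the demonstrated path (1/m)

class DegreeCommandModel:
    def __init__(self):
        # (user, command, degree expression) -> demonstrated travels
        self.demos = defaultdict(list)

    def learn(self, user, command, degree, travel):
        """Record one demonstrated travel for a qualified command."""
        self.demos[(user, command, degree)].append(travel)

    def reproduce(self, user, command, degree):
        """Average this user's demonstrations for the degree expression;
        fall back to other users' demonstrations if none exist."""
        demos = self.demos.get((user, command, degree), [])
        if not demos:
            demos = [t for (u, c, g), ts in self.demos.items()
                     if c == command and g == degree for t in ts]
        if not demos:
            raise KeyError("no demonstrations for this command")
        return Travel(mean(t.speed for t in demos),
                      mean(t.curvature for t in demos))

# Example: model.learn("alice", "go straight", "speedily", Travel(1.2, 0.0))
# followed by model.reproduce("alice", "go straight", "speedily").
```

A real system would additionally condition the reproduced parameters on the current map, as the abstract describes; that matching step is omitted here.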
Proceedings of the 3rd International Conference on Human-Agent Interaction
Minho Lee, T. Omori, Hirotaka Osawa, Hyeyoung Park, J. Young
DOI: 10.1145/2814940
It is our great pleasure to welcome you to the Third International Conference on Human-Agent Interaction (HAI 2015) in Korea. HAI has grown beyond our expectations over the last two years, and this year's HAI continues its tradition of being a leading forum for the presentation of research results and experience reports on leading-edge issues of human-agent interaction. HAI gathers researchers from fields spanning engineering, computer science, psychology, sociology, and cognitive science, covering diverse topics including human interaction with virtual and physical agents, communication and interaction with smart homes and smart cars, and the modeling of those interactions.

The mission of the conference is to share novel quantitative and qualitative research on human and artificial agent interaction and to identify new directions for future research and development. HAI gives researchers and practitioners a unique opportunity to share their perspectives with others interested in the various aspects of human-agent interaction.

This year HAI features three exciting keynote talks by world leaders in areas related to human-agent interaction. We encourage participants to attend them; these valuable and insightful talks can and will guide us to a better understanding of various research issues and methods in the field:
Designing the Robotic User Experience: Behavior and Appearance, by Guy Hoffman (Assistant Professor, IDC Herzliya)
Understanding Human Internal States: I Know What You Are and What You Think, by Soo-Young Lee (Professor, KAIST)
The Evolutionary Origins of Human Cognitive Development: Insights from Research on Chimpanzees, by Tetsuro Matsuzawa (Professor, Kyoto University)

We also have 23 oral presentations, 51 poster presentations, and 2 workshops, all presenting the latest research results and ideas. The discussions among researchers from all over the world will be exciting.
Model of Agency Identification through Subconscious Embodied Interaction
Takafumi Sakamoto, Yugo Takeuchi
DOI: 10.1145/2814940.2814950
Humans can communicate because they adapt and adjust their behavior to each other. Developing a relationship with an unknown artifact, on the other hand, is difficult. To address this problem, some robots utilize the context of interaction between humans. However, there has been little investigation of interaction in settings where no information about the interaction partner is provided and no experimental task is given. Clarifying how people come to perceive unknown objects as agents is therefore required. We believe that a stage of subconscious interaction plays a role in this process. We created an experimental environment to observe the interaction between a human and a robot whose behavior was actually mapped from another human. The participants were asked to verbalize what they were thinking or feeling while interacting with the robot. The results of our experiment suggest that the timing of movement was used as the cue for developing the interaction. We still need to verify the effects of other interaction patterns and to examine what kinds of actions and reactions are regarded as signals that enhance interpersonal interaction.
An Anthropomorphic Approach to Presenting Information on Demand Response Reflecting Household's Environmental Moral
T. Nakayama, Hirotaka Osawa, S. Okushima
DOI: 10.1145/2814940.2815012
Demand response methods are expected to reduce carbon dioxide emissions and stabilize the power supply. In many cases, however, demand control depends on the electric power supplier's intentions, and users are therefore forced to accept inconvenience in their daily lives. We need to develop an innovative method that encourages users to participate in demand response and power-saving actions without requiring such endurance. Usually, power-saving behaviors are motivated by pecuniary incentives or by environmental concerns such as reducing carbon dioxide emissions and the probability of power failures. Against this background, this paper evaluates the effectiveness of encouraging people's engagement in power-saving actions by presenting numerical, visualized, or anthropomorphized information regarding pecuniary incentives and environmental concerns. The results show that presenting anthropomorphized information induced users' power-saving behaviors more strongly than the other methods.