E-Gaze: Create Gaze Communication for People with Visual Disability
S. Qiu, Hirotaka Osawa, Jun Hu, G.W.M. Rauterberg
DOI: 10.1145/2814940.2814974

Gaze signals are frequently used by sighted people as visual cues in social interactions. However, these signals and cues are hardly accessible to people with visual disabilities. We propose a conceptual design for E-Gaze glasses, an assistive device intended to create gaze communication between blind and sighted people in face-to-face conversations. We interviewed 20 totally blind and low-vision participants to envision the use of E-Gaze, explaining its four features with a persona and use scenarios. Participants discussed the usefulness, efficiency, and interest of each feature. The results helped us clarify the design direction and plan further research.
{"title":"E-Gaze: Create Gaze Communication for People with Visual Disability","authors":"S. Qiu, Hirotaka Osawa, Jun Hu, G.W.M. Rauterberg","doi":"10.1145/2814940.2814974","DOIUrl":"https://doi.org/10.1145/2814940.2814974","url":null,"abstract":"Gaze signals are frequently used by the sighted in social interactions as visual cues. However, these signals and cues are hardly accessible for people with visual disability. A conceptual design of E-Gaze glasses is proposed, assistive to create gaze communication between blind and sighted people in face-to-face conversations. We interviewed 20 totally blind and low vision participants to envision the use of the E-Gaze. We explained four features of E-Gaze to participants using persona and use scenarios. Participants discussed the features on their usefulness, efficiency and interest. The results helped us clarify the design direction and further research.","PeriodicalId":427567,"journal":{"name":"Proceedings of the 3rd International Conference on Human-Agent Interaction","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134138167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designing Fluent Human-Robot Collaboration
Guy Hoffman
DOI: 10.1145/2814940.2815016

Within the next few years, personal robots are expected to enter our homes, offices, schools, hospitals, construction sites, and workshops. For these robots to play a successful role in people's professional and personal lives, they need to display the kind of efficient and satisfying interaction that humans are accustomed to from each other. Designing this human-robot interaction is a multifaceted challenge, balancing requirements of the robot's appearance and behavior. A robot's appearance evokes interaction affordances and triggers emotional responses; its behavior communicates internal states and can support action coordination and joint planning. Good HRI design should enlist both facets to enable untrained humans to work fluently and intuitively with the robot. In this talk I will present the approach we have been using over the past decade to develop several non-anthropomorphic robotic systems. The underlying principles of both appearance and behavioral design are movement, timing, and embodiment, acknowledging that human perception is highly sensitive to spatial cues, physical movement, and visual affordances. We design our robots' appearance using techniques from 3D animation, sculpture, and industrial and interaction design. Gestures and behaviors drive decisions on the robot's appearance and mechanical design. Starting from freehand sketches, the robot's personality is built as a computer-animated character, setting the parameters and limits of the robot's degrees of freedom. Then, material and form studies are combined with functional requirements to settle on the final system design. I will exemplify this process with the design of several robots. On the behavioral side, we design around the notion of human-robot fluency: the ability to accurately mesh the robot's activity with that of a human partner. I present computational architectures rooted in timing, joint action, and embodied cognition. Specifically, I discuss anticipatory action for collaboration, and a model of priming through perceptual simulation. Both systems have been shown to have significant effects on the fluency of a human-robot team, and on humans' perception of the robot's intelligence, commitment, and even gender. I then describe an interactive robotic improvisation system that uses embodied gestures for simultaneous, yet responsive, joint musicianship.
{"title":"Designing Fluent Human-Robot Collaboration","authors":"Guy Hoffman","doi":"10.1145/2814940.2815016","DOIUrl":"https://doi.org/10.1145/2814940.2815016","url":null,"abstract":"Within the next few years, personal robots are expected to enter our homes, offices, schools, hospitals, construction sites, and workshops. For these robots to play a successful role in people's professional and personal lives, they need to display the kind of efficient and satisfying interaction that humans are accustomed to from each other. Designing this human-robot interaction is a multifaceted challenge, balancing requirements of the robot's appearance and behavior. A robot's appearance evokes interaction affordances and triggers emotional responses; its behavior communicates internal states, and can support action coordination and joint planning. Good HRI design should enlist both facets to enable untrained humans to work fluently and intuitively with the robot. In this talk I will present the approach we have been using in the past decade to develop several non-anthropomorphic robotic systems. The underlying principles of both appearance and behavioral design are movement, timing, and embodiment, acknowledging that human perception is highly sensitive to spatial cues, physical movement, and visual affordances. We design our robots' appearance using techniques from 3D animation, sculpture, industrial, and interaction design. Gestures and behaviors drive decisions on the robot's appearance and mechanical design. Starting from freehand sketches, the robot's personality is built as a computer animated character, setting the parameters and limits of the robot's degrees of freedom. Then, material and form studies are combined with functional requirements to settle on the final system design. I will exemplify this process on the design of several robots. On the behavioral side, we design around the notion of human-robot fluency---the ability to accurately mesh the robot's activity with that of a human partner. I present computational architectures rooted in timing, joint action, and embodied cognition. Specifically, I discuss anticipatory action for collaboration, and a model of priming through perceptual simulation. Both systems have been shown to have significant effects on the fluency of a human-robot team, and on humans' perception of the robot's intelligence, commitment, and even gender. I then describe an interactive robotic improvisation system that uses embodied gestures for simultaneous, yet responsive, joint musicianship.","PeriodicalId":427567,"journal":{"name":"Proceedings of the 3rd International Conference on Human-Agent Interaction","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127839779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effect of Embodied Cognition on an Impression of a Robot
Misa Yoshizaki, Toshimasa Takai, Eri Takashima, Yusuke Suetsugu, Atsushi Hirota, Shogo Furuhashi, Takashi Uchida, Hirofumi Hayakawa, Yukiko Nishizaki, N. Oka
DOI: 10.1145/2814940.2814972
It is important that participants form a positive impression when they meet a new robot for the first time. We therefore tried to change how participants evaluate a new robot by changing the surrounding environment in the experiment. Embodied cognition is the theory that our cognition of a target is strongly influenced by physical stimulation of our own bodies. Our research investigated the hypothesis that a soft chair can create a favorable impression of a robot. We performed an experiment using a hard plastic chair and a soft cushioned chair. Although the results did not support the hypothesis, they suggested that the effect of embodied cognition may differ between males and females.
{"title":"Effect of Embodied Cognition on an Impression of a Robot","authors":"Misa Yoshizaki, Toshimasa Takai, Eri Takashima, Yusuke Suetsugu, Atsushi Hirota, Shogo Furuhashi, Takashi Uchida, Hirofumi Hayakawa, Yukiko Nishizaki, N. Oka","doi":"10.1145/2814940.2814972","DOIUrl":"https://doi.org/10.1145/2814940.2814972","url":null,"abstract":"It is important that participants form a positive impression when they meet with a new robot for the first time. Therefore, we tried to change how participants evaluate a new robot by changing the surrounding environment in the experiment. Embodied Cognition is a theory that our cognition for a target is strongly influenced by physical stimulation of our own bodies. Our research investigated a hypothesis that a soft chair can make favorable impressions of a robot. We performed an experiment using a hard plastic chair and a soft cushioned chair. Although the results did not support the hypothesis, they suggested that the effect of embodied cognition can be different between males and females.","PeriodicalId":427567,"journal":{"name":"Proceedings of the 3rd International Conference on Human-Agent Interaction","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126641604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Active Learning for Large-scale Object Classification: from Exploration to Exploitation
Ho-Gyeong Kim, Jihyeon Roh, Hwaran Lee, Geon-min Kim, Soo-Young Lee
DOI: 10.1145/2814940.2814989
Information and communication technologies supply data every day at an ever-increasing rate; however, almost all of the accumulated data are unlabeled, and obtaining labels is expensive and time-consuming. Selecting and labeling the raw samples expected to be more informative than others can improve models without high cost. This process, called selective sampling, is an essential part of active learning. So far, most research has concentrated on classical uncertainty measures for acquiring informative data, which corresponds to the "exploitation" side of learning. However, when the initial labeled dataset is too small or biased, the early-stage model can be unreliable and its decision boundary over-fitted to the initial data; moreover, data obtained by the exploitation strategy may degrade the model further. We therefore introduce an "exploration" strategy alongside the "exploitation" strategy. In this paper, we employ Self-Organizing Maps (SOMs), a type of neural network, to estimate and explore the data distribution. For exploitation, margin sampling is applied to the classifier, a neural network with a softmax output layer. The effectiveness of the proposed methods is demonstrated on the ILSVRC-2011 image classification task using features extracted from well-trained Convolutional Neural Networks (CNNs). Active learning with the exploration strategy shows its potential by stabilizing the early-stage model and reducing the classification error rate, ultimately yielding high-quality models.
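Margin sampling, the exploitation criterion named above, has a standard form: query the samples whose top two class posteriors are closest. A minimal sketch of that criterion follows; the function and variable names are illustrative, since the abstract does not give implementation details.

```python
import numpy as np

def margin_sampling(probs: np.ndarray, k: int) -> np.ndarray:
    """Pick the k unlabeled samples with the smallest decision margin.

    probs: (n_samples, n_classes) softmax outputs of the classifier.
    The margin is the gap between the best and second-best class
    posterior; a small gap means the model is uncertain there, so
    labeling that sample is expected to be most informative.
    """
    sorted_probs = np.sort(probs, axis=1)[:, ::-1]   # descending per row
    margins = sorted_probs[:, 0] - sorted_probs[:, 1]
    return np.argsort(margins)[:k]                   # smallest margins first

# Hypothetical usage: query the 100 most ambiguous images for labeling.
# probs = softmax_head.predict_proba(cnn_features)   # illustrative API
# query_idx = margin_sampling(probs, k=100)
```

For the exploration side, one plausible realization is to train the SOM on the same CNN features and query samples mapped to SOM nodes that contain no labeled data yet, which counteracts a biased initial decision boundary; the abstract combines both strategies but does not specify the exact criterion.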
{"title":"Active Learning for Large-scale Object Classification: from Exploration to Exploitation","authors":"Ho-Gyeong Kim, Jihyeon Roh, Hwaran Lee, Geon-min Kim, Soo-Young Lee","doi":"10.1145/2814940.2814989","DOIUrl":"https://doi.org/10.1145/2814940.2814989","url":null,"abstract":"Information and communication technologies supply data every day at incredibly increasing rate, however, almost all of the accumulated data are unlabeled and obtaining their labels is expensive and time-consuming. Among the raw data, selecting and labeling some samples expected to be more informative than others can enhance machines without high cost. This process is called selective sampling, essential part of active learning. So far, most researches have concentrated on classical uncertainty measures to acquire informative data, which is related to \"exploitation\" process of learning. However, when the initial labeled dataset is too small or biased, the early stage model can be unreliable and its decision boundary would be over-fitted to the initial data. Moreover, the obtained data by the exploitation strategy may exacerbate the model further. We introduced \"exploration\" strategy as well as \"exploitation\" strategy. In this paper, we employ Self-Organizing Maps (SOM), one of neural networks to estimate and explore data distribution. For exploitation, margin sampling is applied to the classifier, neural network with soft-max output layer. The effectiveness proposed methods are demonstrated on ILSVRC-2011 image classification task based on features extracted from well-trained Convolutional Neural Networks (CNN). Active learning with exploration strategy shows its potential by stabilizing the early stage model and reducing the classification error rate, and finally making it to be high-quality models.","PeriodicalId":427567,"journal":{"name":"Proceedings of the 3rd International Conference on Human-Agent Interaction","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115827511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spatial Communication and Recognition in Human-agent Interaction using Motion-parallax-based 3DCG Virtual Agent
Naoto Yoshida, Tomoko Yonezawa
DOI: 10.1145/2814940.2814954

In this paper, we propose spatial communication between a virtual agent and a user through a common space spanning both the virtual world and real space. For this purpose, we propose SCoViA, a virtual agent system that synthesizes the agent's appearance in synchrony with the user's position relative to the monitor, exploiting the user's motion parallax to realize human-agent communication in the real world. In this system, a real-time three-dimensional computer-generated (3DCG) agent is drawn from the user's changing viewpoint in virtual space, corresponding to the position of the user's head as detected by face tracking. We conducted two verifications of spatial communication between a virtual agent and a user. First, we verified the effect of synchronized redrawing of the virtual agent on the accurate recognition of a particular object in the real world. Next, we verified the approachability of the agent reacting with eye contact when the user views it from a diagonal angle. The results showed that the virtual agent's eye contact affected approachability regardless of the user's viewpoint, and that our proposed system using motion parallax significantly improved the accuracy of the agent's gazing position with respect to each real object. Finally, we discuss the possibility of real-world human-agent interaction using the positional relationships among the agent, real objects, and the user.
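Motion-parallax rendering of this kind is commonly implemented as head-coupled perspective: the tracked head position drives a virtual camera with an off-axis projection whose frustum passes through the physical screen edges. The sketch below shows that camera update under a pinhole-camera assumption; the names and tracker API are illustrative, not SCoViA's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Screen:
    width: float   # physical screen width in meters
    height: float  # physical screen height in meters

def offaxis_frustum(head, screen: Screen, near: float = 0.01):
    """Near-plane frustum bounds for a head-coupled camera.

    head: (x, y, z) head position in meters relative to the screen
    center, with z the viewer's distance from the screen plane.
    The frustum is made to pass through the physical screen edges as
    seen from the eye, so the rendered agent stays registered with
    real space while the user moves.
    """
    x, y, z = head
    s = near / z  # project the screen rectangle onto the near plane
    return ((-screen.width / 2 - x) * s,   # left
            ( screen.width / 2 - x) * s,   # right
            (-screen.height / 2 - y) * s,  # bottom
            ( screen.height / 2 - y) * s)  # top

# Per frame (illustrative tracker API); the view matrix must also place
# the camera at the head position, looking perpendicular to the screen:
# l, r, b, t = offaxis_frustum(face_tracker.head_position(), Screen(0.52, 0.32))
# then build the projection with glFrustum(l, r, b, t, near, far).
```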
{"title":"Spatial Communication and Recognition in Human-agent Interaction using Motion-parallax-based 3DCG Virtual Agent","authors":"Naoto Yoshida, Tomoko Yonezawa","doi":"10.1145/2814940.2814954","DOIUrl":"https://doi.org/10.1145/2814940.2814954","url":null,"abstract":"In this paper, we propose spatial communication between a virtual agent and a user through common space in both virtual world and real space. For this purpose, we propose the virtual agent system SCoViA, which renders a synchronized synthesis of the agent's appearance corresponding to the user's relative position to the monitor based on synchronization with the user's motion parallax in order to realize human-agent communication in the real world. In this system, a real-time three-dimensional computer-generated (3DCG) agent is drawn from the changing viewpoint of the user in a virtual space corresponding to the position of the user's head as detected by face tracking. We conducted two verifications and discussed the spatial communication between a virtual agent and a user. First, we verified the effect of a synchronized redrawing of the virtual agent for the accurate recognition of a particular object in the real world. Next, we verified the approachability of the agent by reacting to the user's eye contact from a diagonal degree to the virtual agent. The results of the evaluations showed that the virtual agent's eye contact affected approachability regardless of the user's viewpoint and that our proposed system using motion parallax could significantly improve the accuracy of the agent's gazing position with each real object. Finally, we discuss the possibility of the real-world human-agent interaction using positional relationship among the agent, real objects, and the user.","PeriodicalId":427567,"journal":{"name":"Proceedings of the 3rd International Conference on Human-Agent Interaction","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115080157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Concentration Monitoring for Intelligent Tutoring System Based on Pupil and Eye-blink
Giyoung Lee, A. Ojha, Minho Lee
DOI: 10.1145/2814940.2815000

Monitoring a learner's concentration level is important for maximizing the learning effect, giving proper feedback on tasks, and understanding learner performance. In this paper, we propose a personal concentration-level monitoring system for users performing online tasks on a computer, based on analysis of their pupillary response and eye-blinking pattern. We use a low-priced web camera to detect the eye-blinking pattern and a portable eye tracker to detect the pupillary response. Experimental results show good performance of the proposed system and suggest that it can be used in various real applications such as intelligent tutoring and e-learning systems.
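As a rough illustration of how the two signals might be fused, a per-window score can combine blink rate (webcam) and pupil diameter (eye tracker). The weighting and baselines below are hypothetical, not values reported in the paper.

```python
import numpy as np

def concentration_score(blink_times, pupil_diam_mm, window_s=60.0, w=0.5):
    """Crude concentration estimate in [0, 1] over one sliding window.

    blink_times:   blink timestamps (s) inside the window (webcam).
    pupil_diam_mm: pupil diameter samples over the window (eye tracker).
    Treats a lower blink rate and a larger pupil diameter as markers of
    higher concentration; the baselines (30 blinks/min, 3-5 mm pupil)
    are rough physiological ranges, not values from the paper.
    """
    blinks_per_min = len(blink_times) * 60.0 / window_s
    blink_term = np.clip(1.0 - blinks_per_min / 30.0, 0.0, 1.0)
    pupil_term = np.clip((np.mean(pupil_diam_mm) - 3.0) / 2.0, 0.0, 1.0)
    return w * blink_term + (1.0 - w) * pupil_term
```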
{"title":"Concentration Monitoring for Intelligent Tutoring System Based on Pupil and Eye-blink","authors":"Giyoung Lee, A. Ojha, Minho Lee","doi":"10.1145/2814940.2815000","DOIUrl":"https://doi.org/10.1145/2814940.2815000","url":null,"abstract":"Monitoring the concentration level of a learner is important to maximize the learning effect, giving proper feedback on tasks and to understand the performance of learners in tasks. In this paper, we propose a personal concentration level monitoring system when a user performs an online task on a computer by analyzing his/her pupillary response and eye-blinking pattern. We use low-priced web camera to detect eye blinking pattern and a portable eye tracker to detect pupillary response. Experimental results show good performance of the proposed concentration level monitoring system and suggest that it can be used for various real applications such as intelligent tutoring system, e-learning system, etc.","PeriodicalId":427567,"journal":{"name":"Proceedings of the 3rd International Conference on Human-Agent Interaction","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129902738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Long-Term Feedback Mechanisms for Robotic Assisted Indoor Cycling Training
S. Schneider, L. Süssenbach, I. Berger, F. Kummert
DOI: 10.1145/2814940.2814962
We present a concept for long-term feedback during robot-assisted indoor cycling training. Our feedback model captures different aspects of sport motivation theory. Furthermore, we present the measurements we designed to evaluate the robot's persuasiveness and the user's compliance. We conducted an intensive 18-day isolation study in two campaigns (socially assistive robot vs. display-instructed, n=16) in cooperation with the German Aerospace Center. The results show that users tend to comply with the robot's instructions and that there is a significant difference in compliance between the two conditions.
{"title":"Long-Term Feedback Mechanisms for Robotic Assisted Indoor Cycling Training","authors":"S. Schneider, L. Süssenbach, I. Berger, F. Kummert","doi":"10.1145/2814940.2814962","DOIUrl":"https://doi.org/10.1145/2814940.2814962","url":null,"abstract":"We present a concept for long-term feedback during robot assisted indoor cycling training. Our feedback model captures different aspects from sport motivation theory. Furthermore, we present our designed measurements to evaluate the robot's persuasiveness and user's compliance. We conducted an intensive 18-day isolation study in two campaigns (e.g. socially assistive robot vs. display instructed, n=16) in cooperation with the German Aerospace Center. The results show that users tend to comply to the robot's instructions and that there is a significant difference in compliance between the two conditions.","PeriodicalId":427567,"journal":{"name":"Proceedings of the 3rd International Conference on Human-Agent Interaction","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116127078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Method for Speech Dereverberation Based on an Image Deblurring Algorithm Using the Prior of Speech Magnitude Gradient Distribution in the Time-Frequency Domain
W. Jo, Ji-Won Cho, Changsoo Je, Hyung-Min Park
DOI: 10.1145/2814940.2814992

We propose a speech dereverberation method in the time-frequency domain based on an image deblurring algorithm. A reverberant speech magnitude can be modeled as the convolution of a clean speech magnitude with a reverberation filter in the time-frequency domain, so the dereverberation problem can be regarded as an image deblurring problem. The proposed method therefore estimates the clean speech magnitude in the time-frequency domain using a fast image deconvolution method with priors on the sparsity of the clean speech magnitude gradient and the exponentially decaying property of reverberation filters along the time axis. Dereverberation is then performed by scaling the reverberant speech magnitude with a mask obtained from the estimated clean magnitude. Experimental results show that the described method can enhance speech.
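Written out, the model described above treats each frequency band's magnitude envelope as a 1-D signal blurred along time. The following is our sketch of that formulation; the notation and the exact form of the sparsity penalty are assumptions, not taken from the paper.

```latex
% Per-frequency-band convolution along time, with an exponential decay
% prior on the reverberation filter (our notation):
|Y(t,f)| \approx \sum_{\tau \ge 0} h_f(\tau)\, |S(t-\tau,f)|,
\qquad h_f(\tau) \propto e^{-\alpha_f \tau}.
% MAP estimate with a sparse prior on the clean magnitude gradient
% (hyper-Laplacian-style, as in fast image deconvolution methods),
% followed by the masking step the abstract describes:
\hat{S} = \arg\min_{|S|} \bigl\| |Y| - h * |S| \bigr\|_2^2
        + \lambda \bigl\| \nabla |S| \bigr\|_p^p, \quad 0 < p \le 1;
\qquad M(t,f) = \hat{S}(t,f) \, / \, |Y(t,f)|.
```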
{"title":"A Method for Speech Dereverberation Based on an Image Deblurring Algorithm Using the Prior of Speech Magnitude Gradient Distribution in the Time-Frequency Domain","authors":"W. Jo, Ji-Won Cho, Changsoo Je, Hyung-Min Park","doi":"10.1145/2814940.2814992","DOIUrl":"https://doi.org/10.1145/2814940.2814992","url":null,"abstract":"We propose a speech dereverberation method in the time-frequency domain, based on an image deblurring algorithm. A reverberant speech magnitude can be modeled as a convolution of a clean speech with a reverberation filter in time-frequency domain. Then, dereverberation problem can be regarded as that of image deblurring. Therefore, the proposed method estimates the clean speech magnitude in the time-frequency domain by using the fast image deconvolution method with priors on sparsity of the clean speech magnitude gradient and exponentially decaying property of reverberation filters along the time axis. Then, scaling the reverberation speech magnitude by a mask obtained from the estimated clean one performs dereverberation. Experimental results show that the described method can enhance speech.","PeriodicalId":427567,"journal":{"name":"Proceedings of the 3rd International Conference on Human-Agent Interaction","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115151087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Preliminary Study on Human Trust Measurements by EEG for Human-Machine Interactions
Suh-Yeon Dong, Bo-Kyeong Kim, Kyeongho Lee, Soo-Young Lee
DOI: 10.1145/2814940.2814993
We propose a novel experimental paradigm for measuring human trust in machines during a collaborative and egoistic theory-of-mind game. To elicit different levels of human trust in machine partners, we control the technical capability and humanlike cues of the autonomous agent in the cognitive experiments while recording participants' electroencephalography (EEG). The trust values measured in various situations will be used to develop a dynamic trust model for efficient human-machine systems.
{"title":"A Preliminary Study on Human Trust Measurements by EEG for Human-Machine Interactions","authors":"Suh-Yeon Dong, Bo-Kyeong Kim, Kyeongho Lee, Soo-Young Lee","doi":"10.1145/2814940.2814993","DOIUrl":"https://doi.org/10.1145/2814940.2814993","url":null,"abstract":"We propose a novel experiment paradigm to measure human trust on machine during a collaborative and egoistic theory-of-mind game. To show a different level of human trust on machine partners, we control the technical capability and humanlike cues of the autonomous agent in the cognitive experiments while recording participant's electroencephalography (EEG). The measured human trust values at various situations will be used to develop a dynamic trust model for efficient human-machine systems.","PeriodicalId":427567,"journal":{"name":"Proceedings of the 3rd International Conference on Human-Agent Interaction","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133796893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Future Roles for Android Robots: Survey and Trial
T. Hoshikawa, Kohei Ogawa, H. Ishiguro
DOI: 10.1145/2814940.2814960

The future role of android robots is discussed. Survey results reveal that people are capable of viewing androids as companions and will accept androids operating in various roles within human society; however, people are uncomfortable with androids in roles where they persuade humans to do something, affect humans' minds, or supervise young children. To examine people's responses to an actual android within human society, an android capable of operating as a "lecturer" was implemented and an evaluative experiment conducted. It was found that androids can evoke feelings of attentiveness much stronger than those evoked by existing media. However, use of the lecturer android did not improve efficiency. It is possible that people expected the android to have human-like abilities because of its human-like appearance. Regardless of the cause, the experiment makes clear that androids must be given more human-like functions if they are to fill useful roles within human society.
{"title":"Future Roles for Android Robots: Survey and Trial","authors":"T. Hoshikawa, Kohei Ogawa, H. Ishiguro","doi":"10.1145/2814940.2814960","DOIUrl":"https://doi.org/10.1145/2814940.2814960","url":null,"abstract":"The future role of android robots is discussed. Survey results reveal that people are capable of viewing androids as companions and will accept androids operating in various roles within human society; however, people are uncomfortable with androids operating in roles in which they are able to persuade humans to do something, affect humans' minds, or supervise young children. To examine people's response to an actual android robot within human society, an android capable of operating as \"lecturer\" was implemented and an evaluative experiment conducted. It was found that androids can evoke feelings of attentiveness that are much stronger than those evoked by existing media. Furthermore, use of the lecturer android did not improve efficiency. It is possible that people expected the android to have human-like abilities because of its human-like appearance. However, regardless of the cause, it is clear from the experiment that it is necessary to program androids with more human-like functions if they are to fill useful roles within human society.","PeriodicalId":427567,"journal":{"name":"Proceedings of the 3rd International Conference on Human-Agent Interaction","volume":"231 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134389333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}