We are interested in the social implications of working alongside robots. In this paper we look at a humanoid robot quality inspector acting alongside workers in an assembly line. This setting is viable in small-scale assembly lines where human assembly workers provide flexible, rapid assembly. A robotic quality inspector could enhance the quality assurance process, but it places the robot in a position of relative seniority to the assembly workers. We present the results of an initial in-lab pilot study designed with our industry collaborators. In our pilot, a humanoid robot visually inspected participants' assembled products in a shared workspace and provided critiques that followed simple models of robotic social feedback. Our findings suggest that people's opinions of the robot (trust, impression of intelligence, etc.) changed based on the robot's social behaviors while it was judging the participants' work. Additionally, people rated the robot more negatively if they disagreed with the robot's opinions of their work, regardless of the robot's social behavior and the value of its critique.
{"title":"Inspector Baxter: The Social Aspects of Integrating a Robot as a Quality Inspector in an Assembly Line","authors":"Amy Banh, Daniel J. Rea, J. Young, E. Sharlin","doi":"10.1145/2814940.2814955","DOIUrl":"https://doi.org/10.1145/2814940.2814955","url":null,"abstract":"We are interested in the social implications of working alongside robots. In this paper we look at a humanoid robot quality inspector, acting alongside workers in an assembly line. This setting is viable in small scale assembly lines where human assembly workers provide flexible, rapid assembly. A robotic quality inspector could enhance the quality assurance process, but places the robot in a position of relative seniority to the assembly workers. We present the results of an initial in-lab pilot study designed with our industry collaborators. In our pilot, a humanoid robot visually inspected participants' assembled products in a shared workspace and provided critiques that follow simple models of robotic social feedback. Our findings suggest that people's opinions of the robot (trust, impression of intelligence, etc.) changed based on the robot's social behaviors while it is judging the participant's work. Additionally, people rated the robot more negatively if they disagreed with the robot's opinions of their work, regardless of the robot social behavior and the value of its critique.","PeriodicalId":427567,"journal":{"name":"Proceedings of the 3rd International Conference on Human-Agent Interaction","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116723107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Various indoor propagation models based on femtocell deployment have been proposed in the literature. However, none of these schemes provides a generic model for indoor propagation, because important parameters related to the wall and floor effects are missing. Existing schemes consider only wall thickness and do not account for ceiling height when modeling the effect of different floors. The proposed scheme instead uses the wall material, rather than its thickness, for the wall scenario of the indoor propagation model, and it also considers the total height difference between the user equipment and the access point for the floor scenario. With these parameters, the proposed scheme achieves a lower mean absolute error than all existing models. All schemes were tested in a three-story building with varying numbers of penetrated walls and floors.
{"title":"An Empirical Path Loss Model for Indoor Environments in 2.4GHz 802.11n Networks","authors":"Ankit Prabhatbhai Patel, Minsoo Jang, Tony A. B. Y. Varkey, K. Lim","doi":"10.1145/2814940.2814971","DOIUrl":"https://doi.org/10.1145/2814940.2814971","url":null,"abstract":"Various indoor models based on the femtocell deployment have been proposed in the literature. However none of the schemes provides a generic model for the indoor propagation, because some important parameters are missing related to the wall and floor effects scenario. All existing schemes considering only the thickness of wall and does not consider the ceiling height for the different floor effect. However the proposed new scheme takes materials of wall and no longer to thickness for the wall scenario in indoor propagation model. And, it is also considering the total height difference between the user equipment and access point for the floor scenario. By taking these parameters, in terms of mean absolute error, the proposed scheme has better performance than all other existing models. All the schemes were tested in a three-story building with different number of penetrated walls and floors.","PeriodicalId":427567,"journal":{"name":"Proceedings of the 3rd International Conference on Human-Agent Interaction","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117082970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper seeks to observe the changes in children from low-income families after they are exposed to education through humanoid robots. As a form of outreach education, the study primarily focuses on the benefits of this type of education.
{"title":"Outreach Education Utilizing Humanoid Type Agent Robots","authors":"Jeonghye Han, Ill-Woo Park, Minhyuk Park","doi":"10.1145/2814940.2814980","DOIUrl":"https://doi.org/10.1145/2814940.2814980","url":null,"abstract":"This Paper seeks to observe the changes in children from low-income families after being exposed to education through humanoid robots. An outreach education as it is, the study primarily focuses on the benefits of the said type of education.","PeriodicalId":427567,"journal":{"name":"Proceedings of the 3rd International Conference on Human-Agent Interaction","volume":"372 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121945658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose a model for designing embodied virtual agents for use in online retailing. Such agents are needed to shift users' internal states and thereby influence their decision-making process. To meet this demand, the agents need to build a trustworthy relationship with users through interaction. In our model, user internal state transitions are derived from agent state transitions. The trustworthiness of an agent changes according to its state and is difficult to manipulate directly. We define two factors of trustworthiness, emotion and intelligence, and propose a way to transition these two factors of the agents to improve their trustworthiness.
{"title":"Transitions of User Internal States by Transition of Agent States","authors":"T. Matsui, S. Yamada","doi":"10.1145/2814940.2814964","DOIUrl":"https://doi.org/10.1145/2814940.2814964","url":null,"abstract":"We propose a model for designing embodied virtual agents for use in online retailing. Such agents are necessary for transiting user internal states to manipulate the decision-making process. To satisfy this demand, the agents need to build a trustworthy relationship with users through interactions. In our model, user internal state transitions are derived from agent state transitions. The trustworthiness of agents changes according to agent states and is difficult to operate directly. We define two factors of trustworthiness, emotion and intelligence, and propose a way to transit these two factors of the agents to improve their trustworthiness.","PeriodicalId":427567,"journal":{"name":"Proceedings of the 3rd International Conference on Human-Agent Interaction","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129608794","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In Advanced Driving Assistance Systems (ADASs), monitoring the driver's cognitive status during driving is an important issue, because most accidents in the automotive sector occur due to the driver's misinterpretation of, or lack of sufficient information about, the situation. To prevent these accidents, current ADASs include lane departure warning systems, vehicle detection systems, advanced cruise control systems, and so on. In a particular driving scenario, the amount of information available to the driver can be judged by monitoring the driver's gaze (internal information) and the distributions corresponding to the forward traffic (external information). Therefore, to provide the driver with sufficient information about a driving scenario, it is essential to integrate the internal and external information, which current ADASs do not do. In this paper, we use the 3D pose estimation algorithm POSIT to estimate the driver's attention area. To estimate the distributions corresponding to the forward traffic, we employ a bottom-up saliency map. To integrate the internal and external information, we use conditional mutual information.
{"title":"Monitoring Driver's Cognitive Status Based on Integration of Internal and External Information","authors":"Seonggyu Kim, R. Mallipeddi, Minho Lee","doi":"10.1145/2814940.2814999","DOIUrl":"https://doi.org/10.1145/2814940.2814999","url":null,"abstract":"In Advanced Driving Assistance Systems (ADASs), monitoring the driver's cognitive status during driving is considered as an important issue. Because, most of the accidents in the automotive sector occur due to the driver's misinterpretation or lack of sufficient information regarding the situation. In order to prevent these accidents, current ADASs include lane departure warning systems, vehicle detection systems, advanced cruise control systems, etc. In a particular driving scenario, the amount of information available to the driver regarding a situation can be judged by monitoring the driver's gaze (internal information) and distributions corresponding to the forward traffic (external information). Therefore, to provide sufficient information to the driver regarding a driving scenario it is essential to integrate the internal and external information which is lacking in the current ADASs. In this paper, we use 3D pose estimate algorithm (POSIT) to estimate driver's attention area. In order to estimate the distributions corresponding to the forward traffic we employ Bottom-up Saliency map. To integrate the internal and external information we use conditional mutual information.","PeriodicalId":427567,"journal":{"name":"Proceedings of the 3rd International Conference on Human-Agent Interaction","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125254428","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Facial expression recognition is an important topic in the field of human-agent interaction, because facial expressions are simple and expressive signals that humans can send to others. Though there have been numerous studies on facial image analysis, the performance of expression recognition is still not acceptable, due to the diversity of human expressions and the enormous variation in facial images. In this paper, we try to improve the performance of facial expression recognition by using multi-task learning techniques for neural networks. Through computational experiments on a benchmark database, we show the potential for performance improvement using multi-task learning.
{"title":"Improving Performance of Facial Expression Recognition using Multi-task Learning of Neural Networks","authors":"Jeongin Seo, Changhun Hyun, Hyeyoung Park","doi":"10.1145/2814940.2815011","DOIUrl":"https://doi.org/10.1145/2814940.2815011","url":null,"abstract":"Facial expression recognition is an important topic in the field of human-agent interaction, because facial expression is simple and impressive signal which human can send to others. Though there have been numerous studies on facial image analysis, the performance of expression recognition is still not acceptable due to the diversity of human expression and enormous variations in facial images. In this paper, we try to improve the performance of facial expression recognition by using multi-task learning techniques of neural networks. Through computational experiments on a benchmark database, we show positive possibility of performance improvement using multi-task learning.","PeriodicalId":427567,"journal":{"name":"Proceedings of the 3rd International Conference on Human-Agent Interaction","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131971144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Palmprint recognition is a challenging problem, mainly due to the low quality of the patterns, variation in lens focal distance, large nonlinear deformations caused by the contactless image acquisition system, and the computational complexity arising from the large image size of typical palmprints. This paper proposes a new contactless biometric system using palm-texture features extracted from a single hand image acquired with a digital camera. In this work, we propose to apply a convolutional neural network (CNN) to palmprint recognition. The results demonstrate that the local and general features extracted using the CNN are invariant to image rotation, translation, and scale variations.
{"title":"Deformation Invariant and Contactless Palmprint Recognition Using Convolutional Neural Network","authors":"Amin Jalali, R. Mallipeddi, Minho Lee","doi":"10.1145/2814940.2814977","DOIUrl":"https://doi.org/10.1145/2814940.2814977","url":null,"abstract":"Palmprint recognition is a challenging problem, mainly due to low quality of the patterns, variation in focal lens distance, large nonlinear deformations caused by contactless image acquisition system, and computational complexity for the large image size of typical palmprints. This paper proposes a new contactless biometric system using features of palm texture extracted from the single hand image acquired from a digital camera. In this work, we propose to apply convolutional neural network (CNN) for palmprint recognition. The results demonstrate that the extracted local and general features using CNN are invariant to image rotation, translation, and scale variations.","PeriodicalId":427567,"journal":{"name":"Proceedings of the 3rd International Conference on Human-Agent Interaction","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122819092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we consider the problem of analyzing human behavioral data to predict human cognitive states and generate corresponding actions for a server agent. Specifically, we aim at predicting human cognitive states during meal time and generating relevant dining services for the human. For this study, we collect behavioral data during meal time using two kinds of wearable devices: an eye tracker and a watch-type EDA device. We focus on the characteristics of the behavioral data, which are heterogeneous, noisy, and temporal, and suggest a novel machine learning algorithm that can analyze the data integrally. The suggested model has a hierarchical structure: the bottom layer combines the multi-modal behavioral data based on the causal structure of the data and extracts feature vectors. Using the extracted feature vectors, the upper layer predicts the cognitive states based on the temporal correlation between feature vectors. Experimental results show that the suggested model can analyze the behavioral data efficiently and predict human cognitive states correctly.
{"title":"Analyzing Human Behavioral Data to Interact with Restaurant Server Agents","authors":"Eun-Sol Kim, Kyoung-Woon On, Byoung-Tak Zhang","doi":"10.1145/2814940.2815013","DOIUrl":"https://doi.org/10.1145/2814940.2815013","url":null,"abstract":"In this paper, we consider a problem of analyzing human behavioral data to predict the human cognitive states and generate corresponding actions of sever-agent. Specifically, we aim at predicting human cognitive states during meal time and generating relevant dining services for the human. For this study, we collect behavioral data using 2 kinds of wearable devices, which are an eye tracker and a watch type EDA device, during meal time. We focus on the characteristics of the behavioral data, which are heterogeneous, noisy and temporal, and suggest a novel machine learning algorithm which can analyze the data integrally. Suggested model has hierarchical structure: the bottom layer combines the multi-modal behavioral data based on causal structure of the data and extracts the feature vector. Using the extracted feature vectors, the upper layer predicts the cognitive states based on temporal correlation between feature vectors. Experimental results show that the suggested model can analyze the behavioral data efficiently and predict the human cognitive states correctly.","PeriodicalId":427567,"journal":{"name":"Proceedings of the 3rd International Conference on Human-Agent Interaction","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123364095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Needs and studies concerning service robots are growing with the rapid expansion of the market. Generating human-like motion on robots is one research area of human-robot interaction, and there are several studies on motion generation and humanoid design. However, kinematic analysis of humanoid design for service tasks should also be performed. This paper focuses on the motor alignment of a humanoid shoulder. After measuring human greeting gestures with a 3D capture camera and predefining the workspace, a simulation is performed to find the motor alignment of the humanoid shoulder that minimizes the total required motor power. The results show that there are specific alignments for designing a shoulder that can perform greeting tasks with small motor power.
{"title":"Kinematic Analysis of Motor Axis Alignment on Humanoid Shoulder for GreetinG","authors":"Chan-Soon Lim, Jeong-Yean Yang, Minchul Shin, D. Kwon","doi":"10.1145/2814940.2815009","DOIUrl":"https://doi.org/10.1145/2814940.2815009","url":null,"abstract":"Needs and studies on service robot are growing with rapid market extension. Generating human-like motions on robot is one research area of human-robot interaction. There are several studies about motion generation and humanoid design. However, kinematic analysis of humanoid design for service tasks should also performed. This paper focuses on motor alignment of humanoid shoulder. After we measure human greeting gestures with 3D capture camera and predefine workspace, simulation is performed to find motor alignment on humanoid shoulder to minimize total required motor power. Result shows that there are specific alignment for designing shoulder to perform greeting tasks on small motor powers.","PeriodicalId":427567,"journal":{"name":"Proceedings of the 3rd International Conference on Human-Agent Interaction","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129845689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper explores the relations between navigation perspectives and visual perception in 3D virtual space by analyzing avatar-based navigation with eye-gaze data. We examine how different perspectives and types of avatars affect users' scopes of visual perception within 3D virtual environments. Throughout this research, we attempt to draw possible connections between the perspectives and cognitive patterns of visual perception. We propose that manipulating the perspectives of avatars, or those of users, has immediate effects on users' scopes of visual perception and patterns of visual attention.
{"title":"Gaze Analysis of Avatar-based Navigation with Different Perspectives in 3D Virtual Space","authors":"Jooyeon Lee, Manri Cheon, Seong-eun Moon, Jong-Seok Lee","doi":"10.1145/2814940.2814981","DOIUrl":"https://doi.org/10.1145/2814940.2814981","url":null,"abstract":"This paper explores the relations between perspectives of navigation and visual perception in 3D virtual space, by analyzing avatar-based navigation with the eye-gaze data. We examine how different perspectives and types of avatars affect the users' scopes of visual perception within 3D virtual environments. Throughout this research, we attempt to draw possible connections between the perspectives and cognitive patterns of visual perception. We propose that manipulating perspectives of avatars or those of users has immediate effects on the users' scopes of visual perception and patterns of visual attention.","PeriodicalId":427567,"journal":{"name":"Proceedings of the 3rd International Conference on Human-Agent Interaction","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115118753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}