Multi-user Robot Impression with a Virtual Agent and Features Modification According to Real-time Emotion from Physiological Signals
Shoudai Suzuki, M. N. Anuardi, Peeraya Sripian, N. Matsuhira, Midori Sugaya
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223585
Communication robots are becoming popular. In particular, partner robots, which can perform personal services, are in high demand, but they can be prohibitively expensive. We therefore considered a multi-user robot running a virtual agent service that could satisfy the demands of several users. Several issues must be solved to achieve this. First, there is no general service platform for such robots. Second, even when the multi-user robot executes a virtual agent service, its physical shape and other characteristics can create a strong impression on users. We therefore propose a virtual agent service platform and a features-modification method for a multi-user robot. The robot autonomously adjusts its position according to each user's emotion, estimated in real time from physiological signals. We present a preliminary evaluation of whether the proposed method improves the robot experience even for users who are entirely unfamiliar with the robot.
[RO-MAN 2020 Front matter]
Pub Date: 2020-08-01 | DOI: 10.1109/ro-man47096.2020.9223538
Robocentric Conversational Group Discovery
Viktor Schmuck, Tingran Sheng, O. Çeliktutan
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223570
Detecting people interacting and conversing with each other is essential to equipping social robots with autonomous navigation and service capabilities in crowded social scenes. In this paper, we introduce a method for unsupervised conversational group detection in images captured from a mobile robot's perspective. To this end, we collected a novel dataset called Robocentric Indoor Crowd Analysis (RICA). The RICA dataset features over 100,000 RGB, depth, and wide-angle camera images as well as LIDAR readings, recorded during a social event where the robot navigated between participants and captured interactions among groups using its on-board sensors. Using the RICA dataset, we implemented an unsupervised group detection method based on agglomerative hierarchical clustering. Our results show that incorporating the depth modality and using normalised features in the clustering algorithm improved group detection accuracy by a margin of 3% on average.
Study on a Manipulatable Endoscope with Fins Knit by a Biodegradable String
K. Makino, F. Iwamoto, Hiromi Watanabe, Tadashi Sato, H. Terada, Naoto Sekiya
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223472
The inspection accuracy of a capsule endoscope increases if the operator can manipulate it via wireless communication. Although various studies have pursued such manipulation, feasibility remains a concern; in particular, we consider how the endoscope behaves if it breaks down. This paper therefore describes a manipulatable capsule endoscope that behaves like a normal capsule endoscope even if it fails inside the patient's body, and that requires no drastic modification of a normal endoscope. The fin that provides maneuverability is knit from a biodegradable surgical string that dissolves in the body, which allows various fin shapes to be realized. Safety is guaranteed because the fin dissolves even if it detaches inside the body. In the fundamental experiments, a small motor is employed as the actuator that moves the fin, so that the shape of the capsule endoscope is unchanged. Because its shape is similar to that of a normal capsule endoscope, the proposed endoscope can behave as one. Fundamental experiments confirm the feasibility of the proposed endoscope.
A bistable soft gripper with mechanically embedded sensing and actuation for fast grasping
T. G. Thuruthel, S. H. Abidi, M. Cianchetti, C. Laschi, E. Falotico
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223487
Soft robotic grippers have been shown to be highly effective for grasping unstructured objects with simple sensing and control strategies. However, they are still limited by their speed, sensing capabilities, and actuation mechanisms, which has restricted their use in highly dynamic grasping tasks. This paper presents a soft robotic gripper with tunable bistable properties for sensor-less dynamic grasping. The bistable mechanism allows arbitrarily large strain energy to be stored in the soft system and released upon contact. The mechanism also provides flexibility in the choice of actuation mechanism, as the grasping and sensing phase is completely passive. The theoretical background of the mechanism is presented together with finite element analysis to provide insights into the design parameters. Finally, we experimentally demonstrate sensor-less dynamic grasping of an unknown object within 0.02 seconds, including the time to sense and actuate.
Towards a Real-Time Cognitive Load Assessment System for Industrial Human-Robot Cooperation
Akilesh Rajavenkatanarayanan, Harish Ram Nambiappan, Maria Kyrarini, F. Makedon
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223531
Robots are increasingly present in environments shared with humans, where they can cooperate with human teammates to achieve common goals and complete tasks. This paper focuses on developing a real-time framework that assesses the cognitive load of a human cooperating with a robot on a collaborative assembly task. The framework uses multi-modal sensory data from electrocardiography (ECG) and electrodermal activity (EDA) sensors, extracts novel features from the data, and applies machine learning to detect high or low cognitive load. The framework was evaluated in a user study on a collaborative assembly scenario. The results show that it reliably recognizes high cognitive load, a first step towards enabling robots to better understand their human teammates.
Relevant Perception Modalities for Flexible Human-Robot Teams
Nico Höllerich, D. Henrich
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223593
Robust and reliable perception plays an important role when humans cooperate with robots in industrial or household settings. Various explicit and implicit communication modalities and perception methods can be used to recognize expressed intentions. Depending on the modality, different sensors, areas of observation, and perception methods must be employed, and more modalities increase the complexity and cost of the setup. We consider a cooperative task in a potentially noisy environment where verbal communication is hardly feasible. Our goal is to investigate the importance of different non-verbal communication modalities for intention recognition. To this end, we build upon an established benchmark study for human cooperation and investigate which input modalities contribute most towards recognizing the expressed intention. To measure the detection rate, we conducted a second study in which participants had to predict actions based on a stream of symbolic input data. The findings confirm the existence of a common gesture dictionary and the importance of hand tracking for action prediction when the number of feasible actions increases. The contribution of this work is a usage ranking of gestures and a comparison of input modalities to improve prediction capabilities in human-robot cooperation.
Benchmarks for evaluating human-robot interaction: lessons learned from human-animal interactions
E. Lagerstedt, Serge Thill
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223347
Human-robot interaction (HRI) is fundamentally concerned with studying the interaction between humans and robots. While it is still a relatively young field, it can draw inspiration from other disciplines studying human interaction with other types of agents. Often, such inspiration is sought from the study of human-computer interaction (HCI) and the social sciences studying human-human interaction (HHI); more rarely, the field also turns to human-animal interaction (HAI). In this paper, we identify two distinct underlying motivations for making such comparisons: to form a target to recreate, or to obtain a benchmark (or baseline) for evaluation. We further highlight relevant existing overlap between HRI and HAI, and identify specific themes that are of particular interest for further trans-disciplinary exploration. At the same time, since robots and animals are clearly not the same, we also discuss important differences between HRI and HAI, their complementarity notwithstanding. The overall purpose of this discussion is to create awareness of the potential mutual benefit between the two disciplines and to describe opportunities for future work, both in terms of new domains to explore and existing results to learn from.
LinkBricks: A Construction Kit for Intuitively Creating and Programming Interactive Robots
Jiasi Gao, Meng Wang, Y. Zhu, Haipeng Mi
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223607
This paper presents LinkBricks, a creative construction kit for young children to intuitively create and program interactive robots. Integrating building blocks, a hierarchical programming framework, and a tablet application, the kit is designed to maintain a low floor and wide walls for children who lack knowledge of conventional programming. The blocks have LEGO-compatible interlock structures and are embedded with various wireless sensors and actuators for creating different interactive robots. The programming application is easy to use and provides heuristics that involve children in the creative activities. A preliminary evaluation indicates that LinkBricks increases young children's engagement with, comfort with, and interest in working with interactive robots, and that it has the potential to help them learn the concepts of programming and robots.
Tell me more! A Robot's Struggle to Achieve Artificial Awareness
H. Sirithunge, K. S. Priyanayana, Ravindu T. Bandara, Nikolas Dahn, A. Jayasekara, Chandima Dedduwa Chandima
Pub Date: 2020-08-01 | DOI: 10.1109/RO-MAN47096.2020.9223458
There are many cognitive and psychophysical theories that explain human behavior as well as the behavior of robots. Even so, we still lack a model for perceiving and predicting appropriate behaviors for both the human and the robot during a human-robot encounter. Humans instantly evaluate their surroundings and the people in them before approaching a person or a situation. As robots become more common in social environments, a similar perception of the situation around a human user prior to an interaction is required, since a faulty assessment could violate the social constraints of the interaction. In this paper, we discuss what a robot requires to proactively perceive the nature of a situation, and we report the functional units that come into play during such an encounter. We further identify the cues that such intelligent agents utilize to simulate and evaluate the outcomes of their environment. From this, we discuss the requirements for a unified theory of cognition during human-robot encounters and highlight implications for design constraints in such scenarios.