Development of an Intelligent Cane for Visually Impaired Human Subjects
Pub Date: 2019-10-01 | DOI: 10.1109/RO-MAN46459.2019.8956328
Chirayu Asati, N. Meena, M. F. Orlando
People with visual disabilities often depend on external assistance for decision making, provided by humans, trained guide dogs, or special electronic devices, but each of these aids has limitations. Hence, an intelligent white cane is developed for visually challenged people that uses HC-SR04 ultrasonic sensors to detect any obstacle lying within the sensor's range and determine its distance. The ultrasonic sensor has a range of up to 450 cm, so any object within this range can be detected, and a buzzer emits beeping warning signals to alert the user for prompt action. In addition, an object detection and classification technique is employed in which a web camera captures an image that is then classified. The classification result is produced as text and converted to an audio signal through text-to-speech conversion, implemented in Python using the open-source eSpeak library.
{"title":"Development of an Intelligent Cane for Visually Impaired Human Subjects","authors":"Chirayu Asati, N. Meena, M. F. Orlando","doi":"10.1109/RO-MAN46459.2019.8956328","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956328","url":null,"abstract":"People with visual disabilities are often dependent on external assistance which is provided by either humans, trained dogs, or other special electronic devices for decision making but there are certain limitations to these aids. Hence, an intelligent white cane is developed for visually challenged people which makes use of HR-SO4 ultrasonic sensors to detect any obstacle that lies in the range of the the sensor and determine its distance. The ultrasonic sensor has range up to 450 meters so that any object lying within this range can be easily detected and the warning signal is provided using the buzzer which gives beeping signals in order to alert the user for prompt action. Also, an intelligent technique of object detection and classification using the web camera which captures the image and hence classifies it is being used. The classification obtained is in the form of text which is further converted to audio signal using text-to-speech conversion which is implemented in Python using Espeak open source library.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"120 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116358142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ROS-TiPlEx: How to make experts in A.I. Planning and Robotics talk together and be happy
Pub Date: 2019-10-01 | DOI: 10.1109/RO-MAN46459.2019.8956417
Carlo La Viola, Andrea Orlandini, A. Umbrico, A. Cesta
This paper presents a novel comprehensive framework called ROS-TiPlEx (Timeline-based Planning and Execution with ROS) that provides a shared environment in which experts in robotics and planning can easily interact to, respectively, encode information about low-level robot control and define task planning and execution models. ROS-TiPlEx aims at facilitating the interaction between both kinds of experts, thus enhancing, and possibly speeding up, the design of integrated control. ROS-TiPlEx is the first tool addressing the connection of ROS and timeline-based planning.
{"title":"ROS-TiPlEx: How to make experts in A.I. Planning and Robotics talk together and be happy","authors":"Carlo La Viola, Andrea Orlandini, A. Umbrico, A. Cesta","doi":"10.1109/RO-MAN46459.2019.8956417","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956417","url":null,"abstract":"This paper presents a novel comprehensive framework called ROS-TiPlEx (Timeline-based Planning and Execution with ROS) to provide a shared environment in which experts in robotics and planning can easily interact to, respectively, encode information about low-level robot control and define task planning and execution models. ROS-TiPlEx aims at facilitating the interaction between both kind of experts, thus, enhancing and possibly speeding up the process of an integrated control design. ROS-TiPlEx is the first tool addressing the connection of ROS and timeline-based planning.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126133484","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Impression Change on Nonverbal Non-Humanoid Robot by Interaction with Humanoid Robot
Pub Date: 2019-10-01 | DOI: 10.1109/RO-MAN46459.2019.8956240
Azumi Ueno, Kotaro Hayashi, I. Mizuuchi
We consider that even if a robot is not designed to convey a specific impression, a means of adding an impression to the robot afterwards would be useful for social robot design. In particular, anthropomorphism seems to be an important impression when designing social interaction between humans and robots. In the movie "STAR WARS," the non-humanoid robot R2-D2 communicates mainly through sounds, while the humanoid interpreter robot C-3PO responds to R2-D2's sounds with natural language and gestures. The audience perceives a richer personality in R2-D2 than the information carried by its sounds alone would support. It might therefore be possible to change the impression of a non-humanoid robot that emits simple sounds through its communication with a humanoid robot that speaks a natural language and makes gestures. We conducted an impression evaluation experiment. In the condition where the robots interacted, observers rated the anthropomorphism of the non-humanoid robot higher than in the non-interacting condition. Several other impressions also changed.
{"title":"Impression Change on Nonverbal Non-Humanoid Robot by Interaction with Humanoid Robot","authors":"Azumi Ueno, Kotaro Hayashi, I. Mizuuchi","doi":"10.1109/RO-MAN46459.2019.8956240","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956240","url":null,"abstract":"Even if a robot is not designed with a specific impression, if there is a means that can add an impression later to the robot, it will be useful for social robot design, we considered. In particular, anthropomorphism seems to be an important impression of designing social interaction between humans and robots. In the movie, ”STAR WARS,” there is a non-humanoid robot, called R2-D2, which communicates mainly by sounds. A humanoid interpreter robot, called C-3PO, responds to the sound of R2-D2 with natural language and gesture. And the audience finds the personality in R2-D2 richer than the personality which is based on the information which R2-D2’s sounds have. It might be possible to change the impression of a non-humanoid robot emitting simple sounds by communication with a humanoid robot that speaks a natural language and make gestures. We conducted an impression evaluation experiment. In the condition where robots are interacting, the observer evaluated anthropomorphism of the nonhumanoid robot more than in the non-interacting condition. There were also some other impressions that have changed.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121458242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Proposing Human-Robot Trust Assessment Through Tracking Physical Apprehension Signals in Close-Proximity Human-Robot Collaboration
Pub Date: 2019-10-01 | DOI: 10.1109/RO-MAN46459.2019.8956335
Kasper Hald, M. Rehm, T. Moeslund
We propose a method of human-robot trust assessment for close-proximity human-robot collaboration that uses body tracking to recognize physical signs of apprehension. We tested this by performing skeleton tracking on 30 participants while they repeated a shared task with a Sawyer robot and reported their trust between tasks. We tested different robot velocity and environment conditions, with an unannounced increase in velocity midway through to provoke a dip in trust. Initial analysis shows significant effects of the test conditions on participant movements and reported trust, as well as linear correlations between tracked signs of apprehension and reported trust.
{"title":"Proposing Human-Robot Trust Assessment Through Tracking Physical Apprehension Signals in Close-Proximity Human-Robot Collaboration","authors":"Kasper Hald, M. Rehm, T. Moeslund","doi":"10.1109/RO-MAN46459.2019.8956335","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956335","url":null,"abstract":"We propose a method of human-robot trust assessment in close-proximity human-robot collaboration involving body tracking for recognition of physical signs of apprehension. We tested this by performing skeleton tracking on 30 participant while they repeated a shared task with a Sawyer robot while reporting trust between tasks. We tested different robot velocity and environment conditions with an unannounced increase in velocity midway through to provoke a dip trust. Initial analysis show significant effect for the test conditions on participant movements and reported trust as well as linear correlations between tracked signs of apprehension and reported trust.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121479396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Simultaneously Concentrated PSWF-based Synchrosqueezing S-transform and its application to R peak detection in ECG signal
Pub Date: 2019-10-01 | DOI: 10.1109/RO-MAN46459.2019.8956391
Neha Singh, Puneesh Deora, P. M. Pradhan
Time-frequency (TF) analysis with the well-known S-transform (ST) has been used extensively for QRS detection in electrocardiogram (ECG) signals. However, the conventional Gaussian-window ST suffers from poor TF resolution due to its fixed scaling criterion and the long taper of the Gaussian window. Many variants of the ST using different scaling criteria have been reported in the literature to improve the accuracy of QRS-complex detection. This paper presents the usefulness of the zero-order prolate spheroidal wave function (PSWF) as a window kernel in the ST. The PSWF has the ability to concentrate maximum energy in narrow, finite time and frequency intervals, and provides more flexibility in shaping the window characteristics. The synchrosqueezing transform is a post-processing method that remarkably improves the energy concentration of a TF representation (TFR). This paper proposes a PSWF-based synchrosqueezing ST for the detection of R peaks in ECG signals. The results show that the proposed method accurately detects R peaks with a sensitivity, positive predictivity, and accuracy of 99.96%, 99.96%, and 99.92%, respectively. It also improves upon existing techniques in terms of the aforementioned metrics and the search-back range.
{"title":"Simultaneously Concentrated PSWF-based Synchrosqueezing S-transform and its application to R peak detection in ECG signal","authors":"Neha Singh, Puneesh Deora, P. M. Pradhan","doi":"10.1109/RO-MAN46459.2019.8956391","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956391","url":null,"abstract":"Time-frequency (TF) analysis through well-known TF tool namely S-transform (ST) has been extensively used for QRS detection in Electrocardiogram (ECG) signals. However, Gaussian window-based conventional ST suffers from poor TF resolution due to the fixed scaling criterion and the long taper of the Gaussian window. Many variants of ST using different scaling criteria have been reported in literature for improving the accuracy in the detection of QRS complexes. This paper presents the usefulness of zero-order prolate spheroidal wave function (PSWF) as a window kernel in ST. PSWF has ability to concentrate maximum energy in narrow and finite time and frequency intervals, and provides more flexibility in changing window characteristics. Synchrosqueezing transform is a post processing method that improves the energy concentration in a TFR remarkably. This paper proposes a PSWF-based synchrosqueezing ST for detection of R peaks in ECG signals. The results show that the proposed method accurately detects R peaks with a sensitivity, positive predictivity and accuracy of 99.96 %, 99. 96% and 99. 92% respectively. It also improves upon on existing techniques in terms of the aforementioned metrics and the search back range.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"1 7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128781605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Estimating Optimal Placement for a Robot in Social Group Interaction
Pub Date: 2019-10-01 | DOI: 10.1109/RO-MAN46459.2019.8956318
Sai Krishna Pathi, Annica Kristoffersson, A. Kiselev, A. Loutfi
In this paper, we present a model that proposes an optimal placement for a robot in a social group interaction. Our model estimates the O-space according to F-formation theory, and the method automatically calculates a suitable placement for the robot. The method was evaluated in an experiment where participants stood in different formations and a robot was teleoperated to join the group. In one condition, the operator positioned the robot at the location specified by our algorithm. In the other condition, operators were free to position the robot according to their personal choice. Follow-up questionnaires determined which placements the participants preferred. The results indicate that the proposed method for automatic placement of the robot is supported by the participants. The contribution of this work resides in a novel method to automatically estimate the best placement of the robot, as well as the results from user experiments verifying the quality of this method. These results suggest that teleoperated robots such as mobile robot telepresence systems could benefit from tools that assist operators in placing the robot in groups in a socially accepted manner.
{"title":"Estimating Optimal Placement for a Robot in Social Group Interaction","authors":"Sai Krishna Pathi, Annica Kristoffersson, A. Kiselev, A. Loutfi","doi":"10.1109/RO-MAN46459.2019.8956318","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956318","url":null,"abstract":"In this paper, we present a model to propose an optimal placement for a robot in a social group interaction. Our model estimates the O-space according to the F-formation theory. The method automatically calculates a suitable placement for the robot. An evaluation of the method has been performed by conducting an experiment where participants stand in different formations and a robot is teleoperated to join the group. In one condition, the operator positions the robot according to the specified location given by our algorithm. In another condition, operators have the freedom to position the robot according to their personal choice. Follow-up questionnaires were performed to determine which of the placements were preferred by the participants. The results indicate that the proposed method for automatic placement of the robot is supported from the participants. The contribution of this work resides in a novel method to automatically estimate the best placement of the robot, as well as the results from user experiments to verify the quality of this method. These results suggest that teleoperated robots such as mobile robot telepresence systems could benefit from tools that assist operators in placing the robot in groups in a socially accepted manner.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132712293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards situational awareness from robotic group motion
Pub Date: 2019-10-01 | DOI: 10.1109/RO-MAN46459.2019.8956381
Florent Levillain, D. St-Onge, G. Beltrame, E. Zibetti
The control of multiple robots in tele-exploration tasks is often attentionally taxing, resulting in a loss of situational awareness for operators. Unmanned aerial vehicle swarms require significantly more multitasking than controlling a single aircraft, making it necessary to devise intuitive feedback sources and control methods for these robots. The purpose of this article is to examine a swarm's nonverbal behaviour as a possible way to increase situational awareness and reduce the operator's cognitive load by soliciting intuitions about the swarm's behaviour. To progress toward a database of nonverbal expressions for robot swarms, we first define categories of communicative intents based on spontaneous descriptions of common swarm behaviours. The obtained typology confirms that the first two levels of situational awareness (as defined by Endsley: the elements of the environment and comprehension of the situation) can be shared through the swarm's motion-based communication. We then investigate group motion parameters potentially connected to these communicative intents. The results show that synchronized movement and a tendency to form figures help convey meaningful information to the operator. We then discuss how this can be applied to realistic scenarios for the intuitive command of remote robotic teams.
{"title":"Towards situational awareness from robotic group motion","authors":"Florent Levillain, D. St-Onge, G. Beltrame, E. Zibetti","doi":"10.1109/RO-MAN46459.2019.8956381","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956381","url":null,"abstract":"The control of multiple robots in the context of tele-exploration tasks is often attentionally taxing, resulting in a loss of situational awareness for operators. Unmanned aerial vehicle swarms require significantly more multitasking than controlling a plane, thus making it necessary to devise intuitive feedback sources and control methods for these robots. The purpose of this article is to examine a swarm's nonverbal behaviour as a possible way to increase situational awareness and reduce the operators cognitive load by soliciting intuitions about the swarm's behaviour. To progress on the definition of a database of nonverbal expressions for robot swarms, we first define categories of communicative intents based on spontaneous descriptions of common swarm behaviours. The obtained typology confirms that the first two levels (as defined by Endsley: elements of environment and comprehension of the situation) can be shared through swarms motion-based communication. We then investigate group motion parameters potentially connected to these communicative intents. Results are that synchronized movement and tendency to form figures help convey meaningful information to the operator. We then discuss how this can be applied to realistic scenarios for the intuitive command of remote robotic teams.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128128378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Instrumented shoe-based foot clearance and foot-to-ground angle measurement system for the gait analysis
Pub Date: 2019-10-01 | DOI: 10.1109/RO-MAN46459.2019.8956401
Ashutosh Tiwari, S. Saxena, D. Joshi
This paper presents a wireless gait analysis system that incorporates an anatomically located infrared (IR) distance sensor on the shoe for the measurement of gait parameters such as the foot-to-ground angle (FGA) and foot clearance (FC). The system has been validated against the BTS Bioengineering 3D motion capture system, the gold-standard method, in a gait analysis laboratory, with an FC RMSE of 6.31% of the full range and an FGA RMSE of 5.53% of the full range. The squared correlation coefficient r2 for FC and FGA is 0.970 and 0.935, respectively. The system has a sensor position adjustment mechanism with two degrees of freedom, which allows it to adapt to any foot size. The system is inexpensive, simple to use, and provides accuracy on par with existing systems. It finds application in a variety of clinical domains, for example, the diagnosis of neurological diseases affecting ambulation such as Parkinson's disease and cerebral palsy, gait rehabilitation, and sports. Future work includes validation of the shoe with different foot sizes and different walking speeds.
{"title":"Instrumented shoe-based foot clearance and foot-to-ground angle measurement system for the gait analysis","authors":"Ashutosh Tiwari, S. Saxena, D. Joshi","doi":"10.1109/RO-MAN46459.2019.8956401","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956401","url":null,"abstract":"This paper presents a wireless gait analysis system that incorporates anatomically located infrared (IR) distance sensor on the shoe for the measurement of various gait parameters such as foot-to-ground angle (FGA), foot clearance (FC). The system has been validated against the BTS bioengineering 3D motion capture gold standard method in gait analysis laboratory with FC RMSE error of 6.31% of the full range and FGA RMSE error of 5.53% of the full range. The squared correlation coefficient r2 for FC and FGA is equal to 0.970 and 0.935, respectively. This system has a sensor position adjustment mechanism in two degrees of freedom, which facilitates the adaptability of the system to any foot size. The system is inexpensive, simple to use, and provides accuracy at par to the existing systems. This system finds application in a variety of clinical domains, for example, neurological disease diagnosis affecting ambulation such as Parkinson’s and cerebral palsy, gait rehabilitation, and sports fields. The future scope of this work includes validation of the shoe with different foot sizes and with different walking speed.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134089771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design and Analysis of a Soft Bidirectional Bending Actuator for Human-Robot Interaction Applications
Pub Date: 2019-10-01 | DOI: 10.1109/RO-MAN46459.2019.8956351
K. Singh, Asokan Thondiyath
The design of a novel soft bidirectional actuator that can improve human-robot interaction in collaborative applications is proposed in this paper. The actuator is advantageous over existing designs because it provides an additional degree of freedom for the same number of pressure inputs as conventional designs. This significantly improves the workspace of the bidirectional actuator and enables higher bidirectional bending angles at much lower input pressures. This is achieved by eliminating the passive impedance offered by one side of the bending chamber in compression when the other side of the chamber is inflated. A simple kinematic model of the actuator is presented, and theoretical and finite element analyses are carried out to predict the fundamental behavior of the actuator. The results are validated through experiments on a fabricated prototype of the soft bidirectional bending actuator.
{"title":"Design and Analysis of a Soft Bidirectional Bending Actuator for Human-Robot Interaction Applications","authors":"K. Singh, Asokan Thondiyath","doi":"10.1109/RO-MAN46459.2019.8956351","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956351","url":null,"abstract":"The design of a novel, soft bidirectional actuator which can improve the human-robot interactions in collaborative applications is proposed in this paper. This actuator is advantageous over the existing designs due to the additional degree of freedom for the same number of pressure inputs as found in the conventional designs. This improves the workspace of the bidirectional actuator significantly and is able to achieve higher angles of bidirectional bending at much lower values of input pressure. This is achieved by eliminating the passive impedance offered by one side of the bending chamber in compression when the other side of the chamber is inflated. A simple kinematic model of the actuator is presented and theoretical and finite element analysis is carried out to predict the fundamental behavior of the actuator. The results are validated through experiments using a fabricated model of the soft bidirectional bending actuator.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134494917","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human-Robot Team: Effects of Communication in Analyzing Trust
Pub Date: 2019-10-01 | DOI: 10.1109/RO-MAN46459.2019.8956345
Ștefan-Dan Ciocîrlan, Roxana Agrigoroaie, A. Tapus
Trust is related to the performance of human teams, making it a significant characteristic that also needs to be analyzed in human-robot teams. Trust has long been researched in other domains such as the social sciences, psychology, and economics. Trust within a team is built through common tasks and depends on team performance and communication. Using an online game-based task for human-robot teams, we analyze the effects of three communication conditions on trust (no text or verbal interaction, and text and verbal interaction either related or not related to the task). Additionally, we found that the participants' background is linked to their trust in the interaction with the robot. The results show that in a human-robot team, human trust increases more over time when the human works with a robot that uses task-related text and verbal communication. They further suggest that human trust decreases to a lesser extent when the robot fails at the tasks if it uses text and verbal communication with the human.
{"title":"Human-Robot Team: Effects of Communication in Analyzing Trust","authors":"Ștefan-Dan Ciocîrlan, Roxana Agrigoroaie, A. Tapus","doi":"10.1109/RO-MAN46459.2019.8956345","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956345","url":null,"abstract":"Trust is related to the performance of human teams, making it a significant characteristic, which needs to be analyzed inside human-robot teams. Trust was researched for a long time in other domains like social sciences, psychology, and economics. Building trust within a team is formed through common tasks and it depends on team performance and communication. By applying an online game based tasks for human-robot teams, the effects of three communication conditions (communication without text and verbal interaction, communication with text and verbal interaction related/not related to the task) on trust are analyzed. Additionally, we found that the participants’ background is linked to the trust in the interaction with the robot. The results show that in a human-robot team the human trust will increase more over time when he/she is working with a robot that uses text and verbal interaction communication related to the task. They further suggest that human trust will decrease to a lower extent when the robot fails in doing the tasks if it uses text and verbal communication with the human.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133270395","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}