Impression Change on Nonverbal Non-Humanoid Robot by Interaction with Humanoid Robot
Pub Date: 2019-10-01 | DOI: 10.1109/RO-MAN46459.2019.8956240
Azumi Ueno, Kotaro Hayashi, I. Mizuuchi
We considered that even if a robot is not designed to convey a specific impression, a means of adding an impression to the robot afterwards would be useful for social robot design. In particular, anthropomorphism seems to be an important impression when designing social interaction between humans and robots. In the movie "STAR WARS," there is a non-humanoid robot called R2-D2, which communicates mainly through sounds. A humanoid interpreter robot, C-3PO, responds to R2-D2's sounds with natural language and gestures, and the audience perceives a richer personality in R2-D2 than its sounds alone convey. It might therefore be possible to change the impression of a non-humanoid robot that emits simple sounds through its communication with a humanoid robot that speaks natural language and makes gestures. We conducted an impression evaluation experiment. In the condition where the robots interacted, observers rated the anthropomorphism of the non-humanoid robot higher than in the non-interacting condition. Several other impressions also changed.
{"title":"Impression Change on Nonverbal Non-Humanoid Robot by Interaction with Humanoid Robot","authors":"Azumi Ueno, Kotaro Hayashi, I. Mizuuchi","doi":"10.1109/RO-MAN46459.2019.8956240","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956240","url":null,"abstract":"Even if a robot is not designed with a specific impression, if there is a means that can add an impression later to the robot, it will be useful for social robot design, we considered. In particular, anthropomorphism seems to be an important impression of designing social interaction between humans and robots. In the movie, ”STAR WARS,” there is a non-humanoid robot, called R2-D2, which communicates mainly by sounds. A humanoid interpreter robot, called C-3PO, responds to the sound of R2-D2 with natural language and gesture. And the audience finds the personality in R2-D2 richer than the personality which is based on the information which R2-D2’s sounds have. It might be possible to change the impression of a non-humanoid robot emitting simple sounds by communication with a humanoid robot that speaks a natural language and make gestures. We conducted an impression evaluation experiment. In the condition where robots are interacting, the observer evaluated anthropomorphism of the nonhumanoid robot more than in the non-interacting condition. There were also some other impressions that have changed.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121458242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Influence of Emotions on Time Perception in a Cognitive System for Social Robotics
Pub Date: 2019-10-01 | DOI: 10.1109/RO-MAN46459.2019.8956416
Lorenzo Cominelli, Roberto Garofalo, D. Rossi
In this paper, we discuss evidence provided by neuroscience and psychology studies on human time perception, in terms of its representation and its psychological distortion due to variations in emotional state. We then propose a novel model inspired by these recent findings, to be applied in social robotics control architectures, with specific reference to an existing and already tested bio-inspired cognitive architecture called SEAI (Social Emotional Artificial Intelligence). A hypothesis on how to represent the influence of emotional state on time perception in SEAI is presented, and we discuss the potential of the system with this integrated feature.
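As a toy illustration of the kind of mechanism the abstract alludes to (this is a generic pacemaker-accumulator sketch, not the SEAI model, and the linear arousal coupling and parameter values are assumptions): an internal clock whose pulse rate rises with arousal makes the same objective interval feel longer under high emotional activation.

```python
import numpy as np

def subjective_duration(objective_seconds, arousal, base_rate=10.0, gain=0.5, dt=0.01):
    """Toy pacemaker-accumulator: the internal clock rate rises with arousal,
    so the same objective interval is experienced as longer under high arousal.
    `arousal` is a function of time returning a value in [0, 1]; base_rate,
    gain and the linear coupling are illustrative assumptions."""
    pulses = 0.0
    for t in np.arange(0.0, objective_seconds, dt):
        rate = base_rate * (1.0 + gain * arousal(t))   # arousal speeds up the pacemaker
        pulses += rate * dt                            # accumulate clock pulses
    return pulses / base_rate                          # convert back to perceived seconds

# Example: a 5 s interval feels ~25% longer when arousal is held at 0.5
calm = subjective_duration(5.0, arousal=lambda t: 0.0)
tense = subjective_duration(5.0, arousal=lambda t: 0.5)
print(f"calm: {calm:.2f} s perceived, tense: {tense:.2f} s perceived")
```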
{"title":"The Influence of Emotions on Time Perception in a Cognitive System for Social Robotics","authors":"Lorenzo Cominelli, Roberto Garofalo, D. Rossi","doi":"10.1109/RO-MAN46459.2019.8956416","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956416","url":null,"abstract":"In this paper, we discuss some evidences provided by neuroscience and psychology studies on human time perception, in terms of its representation and its psychological distortion due to emotional state variations. We propose, then, a novel model inspired by these recent findings to be applied in social robotics control architectures, with a specific reference to an existing and already tested bio-inspired cognitive architecture called SEAI (Social Emotional Artificial Intelligence). An hypothesis on how to represent emotional state influence on time perception in SEAI will be presented, discussing the consequent potential of the system with this integrated feature.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"258 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123321289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Simultaneously Concentrated PSWF-based Synchrosqueezing S-transform and its application to R peak detection in ECG signal
Pub Date: 2019-10-01 | DOI: 10.1109/RO-MAN46459.2019.8956391
Neha Singh, Puneesh Deora, P. M. Pradhan
Time-frequency (TF) analysis with the well-known S-transform (ST) has been extensively used for QRS detection in electrocardiogram (ECG) signals. However, the conventional Gaussian-window ST suffers from poor TF resolution due to its fixed scaling criterion and the long taper of the Gaussian window. Many variants of the ST using different scaling criteria have been reported in the literature to improve the accuracy of QRS-complex detection. This paper presents the usefulness of the zero-order prolate spheroidal wave function (PSWF) as a window kernel in the ST. The PSWF concentrates maximum energy in narrow, finite time and frequency intervals and provides more flexibility in shaping the window characteristics. The synchrosqueezing transform is a post-processing method that markedly improves the energy concentration of a TF representation. This paper proposes a PSWF-based synchrosqueezing ST for the detection of R peaks in ECG signals. The results show that the proposed method detects R peaks with a sensitivity, positive predictivity and accuracy of 99.96%, 99.96% and 99.92%, respectively. It also improves upon existing techniques in terms of these metrics and the search-back range.
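A rough sketch of this kind of pipeline (not the authors' implementation, and omitting the synchrosqueezing post-processing step): a filterbank-style S-transform built with a zero-order prolate spheroidal (Slepian/DPSS) window whose length shrinks with frequency, with R peaks located as maxima of the TF energy over the QRS band. The 5-15 Hz band, the time-bandwidth product and the peak-detection thresholds below are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.signal.windows import dpss

def pswf_s_transform(x, fs, freqs, nw=2.5):
    """Filterbank-style S-transform with a zero-order DPSS (PSWF) window.
    The window length shrinks as the analysis frequency rises, mimicking
    the conventional ST scaling rule."""
    n = len(x)
    t = np.arange(n) / fs
    tfr = np.zeros((len(freqs), n), dtype=complex)
    for i, f in enumerate(freqs):
        win_len = max(int(round(fs / f)) | 1, 3)        # odd, frequency-dependent length
        w = dpss(win_len, nw)                           # zero-order prolate spheroidal window
        w = w / w.sum()
        half = win_len // 2
        demod = x * np.exp(-2j * np.pi * f * t)         # shift frequency f down to DC
        padded = np.pad(demod, half, mode="edge")
        tfr[i] = np.convolve(padded, w, mode="valid")[:n]
    return tfr

def detect_r_peaks(ecg, fs):
    freqs = np.linspace(5.0, 15.0, 20)                  # assumed QRS energy band (Hz)
    energy = np.abs(pswf_s_transform(ecg, fs, freqs)).sum(axis=0)
    peaks, _ = find_peaks(energy, distance=int(0.25 * fs),
                          height=0.3 * energy.max())    # refractory spacing and threshold
    return peaks
```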
{"title":"Simultaneously Concentrated PSWF-based Synchrosqueezing S-transform and its application to R peak detection in ECG signal","authors":"Neha Singh, Puneesh Deora, P. M. Pradhan","doi":"10.1109/RO-MAN46459.2019.8956391","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956391","url":null,"abstract":"Time-frequency (TF) analysis through well-known TF tool namely S-transform (ST) has been extensively used for QRS detection in Electrocardiogram (ECG) signals. However, Gaussian window-based conventional ST suffers from poor TF resolution due to the fixed scaling criterion and the long taper of the Gaussian window. Many variants of ST using different scaling criteria have been reported in literature for improving the accuracy in the detection of QRS complexes. This paper presents the usefulness of zero-order prolate spheroidal wave function (PSWF) as a window kernel in ST. PSWF has ability to concentrate maximum energy in narrow and finite time and frequency intervals, and provides more flexibility in changing window characteristics. Synchrosqueezing transform is a post processing method that improves the energy concentration in a TFR remarkably. This paper proposes a PSWF-based synchrosqueezing ST for detection of R peaks in ECG signals. The results show that the proposed method accurately detects R peaks with a sensitivity, positive predictivity and accuracy of 99.96 %, 99. 96% and 99. 92% respectively. It also improves upon on existing techniques in terms of the aforementioned metrics and the search back range.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"1 7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128781605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Privacy First: Designing Responsible and Inclusive Social Robot Applications for in the Wild Studies
Pub Date: 2019-10-01 | DOI: 10.1109/RO-MAN46459.2019.8956461
Meg Tonkin, Jonathan Vitale, S. Herse, S. Raza, Srinivas Madhisetty, Le Kang, The Duc Vu, B. Johnston, Mary-Anne Williams
Deploying social robot applications in public spaces for in-the-wild studies is a significant challenge but critical to the advancement of social robotics. Real-world environments are complex, dynamic, and uncertain, and human-robot interactions can be unstructured and unanticipated. In addition, when the robot is intended to be a shared public resource, management issues such as user access and user privacy arise, leading to design choices that can affect users' trust and the adoption of the designed system. In this paper we propose a user registration and login system for a social robot and report on people's preferences when registering their personal details with the robot to access services. This study is the first iteration of a larger body of work investigating potential use cases for the Pepper social robot at a government-managed centre for startups and innovation. We prototyped and deployed a system for user registration with the robot, which gives users control over registering and accessing services with either face recognition technology or a QR code. The QR code played a critical role in increasing the number of users adopting the technology. We discuss the need to develop social robot applications that responsibly adhere to privacy principles, are inclusive, and cater for a broad spectrum of people.
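To illustrate the QR-code access route in this kind of design (a sketch under assumptions, not the deployed Pepper system): the robot can mint a short-lived, one-time token and render it as a QR code for the user to scan, so a user can register and access services without handing over biometric data. The `qrcode` library, the token lifetime, the payload format and the example URL are all assumptions.

```python
import secrets
import time
import qrcode   # pip install "qrcode[pil]"; library choice is an assumption

# in-memory registry of pending one-time tokens -> expiry timestamps
_pending_tokens = {}
TOKEN_TTL_SECONDS = 120   # assumed lifetime of a registration QR code

def issue_registration_qr(service_url):
    """Create a one-time token, record its expiry, and return a QR image
    encoding a URL the user can open to register without biometrics."""
    token = secrets.token_urlsafe(16)
    _pending_tokens[token] = time.time() + TOKEN_TTL_SECONDS
    return qrcode.make(f"{service_url}/register?token={token}"), token

def redeem_token(token):
    """Accept the token only once, and only while it has not expired."""
    expiry = _pending_tokens.pop(token, None)
    return expiry is not None and time.time() <= expiry

# hypothetical service endpoint for illustration
img, token = issue_registration_qr("https://robot.example.org")
img.save("register.png")
print("token valid:", redeem_token(token))
```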
{"title":"Privacy First: Designing Responsible and Inclusive Social Robot Applications for in the Wild Studies","authors":"Meg Tonkin, Jonathan Vitale, S. Herse, S. Raza, Srinivas Madhisetty, Le Kang, The Duc Vu, B. Johnston, Mary-Anne Williams","doi":"10.1109/RO-MAN46459.2019.8956461","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956461","url":null,"abstract":"Deploying social robots applications in public spaces for conducting in the wild studies is a significant challenge but critical to the advancement of social robotics. Real world environments are complex, dynamic, and uncertain. Human-Robot interactions can be unstructured and unanticipated. In addition, when the robot is intended to be a shared public resource, management issues such as user access and user privacy arise, leading to design choices that can impact on users’ trust and the adoption of the designed system. In this paper we propose a user registration and login system for a social robot and report on people’s preferences when registering their personal details with the robot to access services. This study is the first iteration of a larger body of work investigating potential use cases for the Pepper social robot at a government managed centre for startups and innovation. We prototyped and deployed a system for user registration with the robot, which gives users control over registering and accessing services with either face recognition technology or a QR code. The QR code played a critical role in increasing the number of users adopting the technology. We discuss the need to develop social robot applications that responsibly adhere to privacy principles, are inclusive, and cater for a broad spectrum of people.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130904308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vision-based Fast-terminal Sliding Mode Super Twisting Controller for Autonomous Landing of a Quadrotor on a Static Platform
Pub Date: 2019-10-01 | DOI: 10.1109/RO-MAN46459.2019.8956302
Archit Krishna Kamath, Vibhu Kumar Tripathi, Subhash Chand Yogi, L. Behera
This paper proposes a vision-based sliding mode control technique for autonomous landing of a quadrotor on a static platform. The vision algorithm estimates the quadrotor’s position relative to an ArUco marker placed on the platform using an on-board monocular camera. The relative position is provided as input to a Fast-terminal Sliding Mode Super Twisting Controller (FTSMSTC), which ensures finite-time convergence of the relative position between the landing-pad marker and the quadrotor. In addition, the proposed controller attenuates the chattering phenomenon and guarantees robustness to bounded external disturbances and modelling uncertainties. The proposed vision-based control scheme is implemented in numerical simulations and validated in real time on the DJI Matrice 100.
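The marker-relative position estimation step can be sketched with the legacy cv2.aruco API from opencv-contrib-python (versions before 4.7; the exact return signature varies across OpenCV versions). This is an illustration under assumptions, not the paper's pipeline: the camera intrinsics, marker dictionary and marker size below are placeholders that would come from calibration and the actual landing pad.

```python
import cv2
import numpy as np

# Placeholder intrinsics; in practice these come from camera calibration.
CAMERA_MATRIX = np.array([[600.0, 0.0, 320.0],
                          [0.0, 600.0, 240.0],
                          [0.0, 0.0, 1.0]])
DIST_COEFFS = np.zeros(5)
MARKER_SIZE_M = 0.15   # assumed edge length of the landing-pad ArUco marker

def relative_position_from_marker(frame):
    """Return the marker position in the camera frame (x, y, z in metres),
    or None if no marker is visible. Uses the legacy cv2.aruco API."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    aruco_dict = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_50)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict)
    if ids is None:
        return None
    _, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, MARKER_SIZE_M, CAMERA_MATRIX, DIST_COEFFS)
    return tvecs[0].ravel()   # translation of the first detected marker
```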
{"title":"Vision-based Fast-terminal Sliding Mode Super Twisting Controller for Autonomous Landing of a Quadrotor on a Static Platform","authors":"Archit Krishna Kamath, Vibhu Kumar Tripathi, Subhash Chand Yogi, L. Behera","doi":"10.1109/RO-MAN46459.2019.8956302","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956302","url":null,"abstract":"This paper proposes a vision-based sliding mode control technique for autonomous landing of a quadrotor over the static platform. The proposed vision algorithm estimates the quadrotor’s position relative to an ArUco marker placed on a static platform using an on-board monocular camera. The relative position is provided as an input to a Fast-terminal Sliding Mode Super Twisting Controller (FTSMSTC) which ensures finite time convergence of the relative position between the landing pad marker and the quadrotor. In addition, the proposed controller attenuates chattering phenomena and guarantees robustness towards bounded external disturbances and modelling uncertainties. The proposed vision-based control scheme is implemented using numerical simulations and validated in real-time on the DJI Matrice 100.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128274980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards situational awareness from robotic group motion
Pub Date: 2019-10-01 | DOI: 10.1109/RO-MAN46459.2019.8956381
Florent Levillain, D. St-Onge, G. Beltrame, E. Zibetti
Controlling multiple robots in tele-exploration tasks is often attentionally taxing, resulting in a loss of situational awareness for operators. Unmanned aerial vehicle swarms require significantly more multitasking than controlling a single aircraft, making it necessary to devise intuitive feedback sources and control methods for these robots. The purpose of this article is to examine a swarm's nonverbal behaviour as a possible way to increase situational awareness and reduce the operator's cognitive load by soliciting intuitions about the swarm's behaviour. To progress towards a database of nonverbal expressions for robot swarms, we first define categories of communicative intents based on spontaneous descriptions of common swarm behaviours. The resulting typology confirms that the first two levels of situational awareness (as defined by Endsley: the elements of the environment and comprehension of the situation) can be shared through a swarm's motion-based communication. We then investigate group motion parameters potentially connected to these communicative intents. The results show that synchronized movement and a tendency to form figures help convey meaningful information to the operator. Finally, we discuss how this can be applied to realistic scenarios for the intuitive command of remote robotic teams.
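One common way to quantify the "synchronized movement" parameter discussed here is a polarization order parameter over agent headings: it approaches 1 when all agents move in the same direction and falls towards 0 when headings are scattered. This is a generic group-motion measure offered for illustration, not the authors' parameterisation.

```python
import numpy as np

def polarization(velocities):
    """Velocity-alignment order parameter for a swarm: the norm of the mean
    unit heading vector. 1.0 = perfectly synchronized motion, ~0.0 = disordered."""
    v = np.asarray(velocities, dtype=float)
    headings = v / np.linalg.norm(v, axis=1, keepdims=True)
    return float(np.linalg.norm(headings.mean(axis=0)))

# Example: ten agents moving roughly east vs. ten agents with random headings
aligned = np.tile([1.0, 0.1], (10, 1)) + 0.05 * np.random.randn(10, 2)
scattered = np.random.randn(10, 2)
print(polarization(aligned))    # close to 1
print(polarization(scattered))  # noticeably lower
```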
{"title":"Towards situational awareness from robotic group motion","authors":"Florent Levillain, D. St-Onge, G. Beltrame, E. Zibetti","doi":"10.1109/RO-MAN46459.2019.8956381","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956381","url":null,"abstract":"The control of multiple robots in the context of tele-exploration tasks is often attentionally taxing, resulting in a loss of situational awareness for operators. Unmanned aerial vehicle swarms require significantly more multitasking than controlling a plane, thus making it necessary to devise intuitive feedback sources and control methods for these robots. The purpose of this article is to examine a swarm's nonverbal behaviour as a possible way to increase situational awareness and reduce the operators cognitive load by soliciting intuitions about the swarm's behaviour. To progress on the definition of a database of nonverbal expressions for robot swarms, we first define categories of communicative intents based on spontaneous descriptions of common swarm behaviours. The obtained typology confirms that the first two levels (as defined by Endsley: elements of environment and comprehension of the situation) can be shared through swarms motion-based communication. We then investigate group motion parameters potentially connected to these communicative intents. Results are that synchronized movement and tendency to form figures help convey meaningful information to the operator. We then discuss how this can be applied to realistic scenarios for the intuitive command of remote robotic teams.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128128378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Q-learning Based Navigation of a Quadrotor using Non-singular Terminal Sliding Mode Control
Pub Date: 2019-10-01 | DOI: 10.1109/RO-MAN46459.2019.8956365
Subhash Chand Yogi, Vibhu Kumar Tripathi, Archit Krishna Kamath, L. Behera
This paper demonstrates a hybrid methodology for quadrotor navigation and control in an environment with obstacles, combining a Q-learning strategy for navigation with a non-linear sliding mode control scheme for position and altitude control of the quadrotor. In an unknown environment, an optimal safe path is estimated with the Q-learning scheme by treating the environment as a 3D grid world. A non-singular terminal sliding mode controller (NTSMC) is then employed to navigate the quadrotor along the planned trajectories. The NTSMC used for trajectory tracking ensures robustness to bounded disturbances and parametric uncertainties, guarantees finite-time convergence of the tracking error, and avoids the issues that arise from singularities in the dynamics. The effectiveness of the proposed navigation and control scheme is validated in numerical simulations in which a quadrotor is required to pass through a window.
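The navigation half of this pipeline can be sketched as tabular Q-learning on a 3D occupancy grid (a minimal sketch under assumptions: the 6-connected action set, reward values and hyper-parameters below are illustrative, not the paper's settings). The learned greedy policy yields a waypoint path that a lower-level controller such as the NTSMC would then track.

```python
import random
import numpy as np

def plan_path_q_learning(grid, start, goal, episodes=1000,
                         alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Q-learning on a 3D occupancy grid (True = obstacle)."""
    actions = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    q = np.zeros(grid.shape + (len(actions),))

    def step(state, a):
        nxt = tuple(int(c + d) for c, d in zip(state, actions[a]))
        if (any(c < 0 for c in nxt) or any(c >= dim for c, dim in zip(nxt, grid.shape))
                or grid[nxt]):
            return state, -10.0, False            # leaving the grid / hitting an obstacle
        if nxt == goal:
            return nxt, 100.0, True               # reaching the goal ends the episode
        return nxt, -1.0, False                   # small step cost encourages short paths

    for _ in range(episodes):
        s = start
        for _ in range(500):                      # cap episode length
            a = (random.randrange(len(actions)) if random.random() < epsilon
                 else int(np.argmax(q[s])))
            nxt, r, done = step(s, a)
            q[s + (a,)] += alpha * (r + gamma * np.max(q[nxt]) - q[s + (a,)])
            s = nxt
            if done:
                break

    # greedy rollout of the learned policy
    path, s = [start], start
    while s != goal and len(path) < grid.size:
        s, _, _ = step(s, int(np.argmax(q[s])))
        path.append(s)
    return path

# Example: a wall with a single gap ("window") between start and goal
grid = np.zeros((6, 6, 3), dtype=bool)
grid[2, 1:5, :] = True
grid[2, 3, 1] = False
route = plan_path_q_learning(grid, start=(0, 0, 0), goal=(5, 5, 2))
print(route)
```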
{"title":"Q-learning Based Navigation of a Quadrotor using Non-singular Terminal Sliding Mode Control","authors":"Subhash Chand Yogi, Vibhu Kumar Tripathi, Archit Krishna Kamath, L. Behera","doi":"10.1109/RO-MAN46459.2019.8956365","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956365","url":null,"abstract":"This paper demonstrates an hybrid methodology of quadrotor navigation and control in an environment with obstacles by combining a Q-learning strategy for navigation with a non-linear sliding mode control scheme for position and altitude control of the quadrotor. In an unknown environment, an optimal safe path is estimated using the Q-learning scheme by considering the environment as a 3D grid world. Furthermore, a non-singular terminal sliding mode control (NTSMC) is employed to navigate the quadrotor through the planned trajectories. The NTSMC that is employed for trajectory tracking ensures robustness towards bounded disturbances as well as parametric uncertainties. In addition, it ensures finite time convergence of the tracking error and avoids issues that arise due to singularities in the dynamics. The effectiveness of the proposed navigation and control scheme are validated using numerical simulations wherein a quadrotor is required to pass through a window.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126393685","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Incremental Estimation of Users’ Expertise Level
Pub Date: 2019-10-01 | DOI: 10.1109/RO-MAN46459.2019.8956320
Pamela Carreno-Medrano, Abhinav Dahiya, Stephen L. Smith, D. Kulić
Estimating a user’s expertise level from observations of their actions enables better human-robot collaboration by allowing the robot to adjust its behaviour and the assistance it provides to the skills of the particular user it is interacting with. This paper details an approach to incrementally and continually estimate the expertise of a user whose goal is to complete a given task optimally. The user’s expertise level, represented here as a scalar parameter, is estimated by evaluating how far their actions are from optimal. The proposed approach was tested using data from an online study in which participants were asked to complete various instances of a simulated kitting task. An optimal planner was used to estimate the “goodness” of all available actions at any given task state. We found that our expertise estimates correlate strongly with observed after-task performance metrics and that it is possible to differentiate novices from experts after observing, on average, 33% of the errors made by the novices.
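A minimal sketch of the idea (the exponential-moving-average update and the normalisation are assumptions, not the paper's model): given the planner's value for every available action in a state, a scalar expertise estimate can be updated incrementally from how close each chosen action's value is to the best available one.

```python
class ExpertiseEstimator:
    """Incrementally track a scalar expertise level in [0, 1] from the gap
    between the user's chosen action and the planner's best action."""

    def __init__(self, smoothing=0.1):
        self.smoothing = smoothing
        self.expertise = 0.5          # uninformative prior

    def update(self, action_values, chosen_action):
        best = max(action_values.values())
        worst = min(action_values.values())
        if best == worst:             # all actions equally good: no information gained
            return self.expertise
        # 1.0 if the user picked an optimal action, 0.0 if the worst one
        goodness = (action_values[chosen_action] - worst) / (best - worst)
        self.expertise += self.smoothing * (goodness - self.expertise)
        return self.expertise

# Example: hypothetical action values from an optimal planner for one task state
est = ExpertiseEstimator()
print(est.update({"place_part_A": 1.0, "place_part_B": 0.2, "idle": 0.0},
                 chosen_action="place_part_B"))
```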
{"title":"Incremental Estimation of Users’ Expertise Level","authors":"Pamela Carreno-Medrano, Abhinav Dahiya, Stephen L. Smith, D. Kulić","doi":"10.1109/RO-MAN46459.2019.8956320","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956320","url":null,"abstract":"Estimating a user’s expertise level based on observations of their actions will result in better human-robot collaboration, by enabling the robot to adjust its behaviour and the assistance it provides according to the skills of the particular user it’s interacting with. This paper details an approach to incrementally and continually estimate the expertise of a user whose goal is to optimally complete a given task. The user’s expertise level, here represented as a scalar parameter, is estimated by evaluating how far their actions are from optimal. The proposed approach was tested using data from an online study where participants were asked to complete various instances of a simulated kitting task. An optimal planner was used to estimate the “goodness” of all available actions at any given task state. We found that our expertise level estimates correlate strongly with observed after-task performance metrics and that it is possible to differentiate novices from experts after observing, on average, 33% of the errors made by the novices.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121562723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Natural language interface for programming sensory-enabled scenarios for human-robot interaction
Pub Date: 2019-10-01 | DOI: 10.1109/RO-MAN46459.2019.8956248
N. Buchina, P. Sterkenburg, T. Lourens, E. Barakova
Previous research has shown that robot-mediated therapy may be effective in improving different mental or physical conditions, but this effectiveness strongly depends on how well the therapy can be translated into robot training. The goal of this study is to enable end-users, such as occupational and rehabilitation therapists, to create therapy-specific and sensory-enabled scenarios for a robotic assistant in an unstructured environment without the help of technical professionals. The Cognitive Dimensions of Notations framework was applied to assess the usability of the programming interface, and the cyclomatic complexity method was used to evaluate the complexity of the created robot scenarios. Eleven therapists with a mean age of 39 years, working in the care of persons with visual and intellectual disabilities, participated. The results show good usability of the interface, as measured with the CDN framework, and the cyclomatic complexity analysis showed an increased complexity of the scenarios created by the occupational and rehabilitation therapists. The participants did not request very specifically defined behaviors for the robot, and therefore descriptions in natural text can be used successfully for robot programming.
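For reference, cyclomatic complexity is a simple graph measure, M = E - N + 2P (edges, nodes, connected components), so a scenario whose sensory conditions create more branches scores higher. The graph encoding of a scenario below is an illustrative assumption, not the study's tooling.

```python
import networkx as nx

def cyclomatic_complexity(scenario_graph):
    """M = E - N + 2P for a directed scenario graph; branches introduced by
    sensory conditions add edges and therefore complexity."""
    e = scenario_graph.number_of_edges()
    n = scenario_graph.number_of_nodes()
    p = nx.number_weakly_connected_components(scenario_graph)
    return e - n + 2 * p

# Example: greet -> check response -> (play game | repeat greeting and re-check) -> end
g = nx.DiGraph()
g.add_edges_from([("greet", "check_response"),
                  ("check_response", "play_game"),
                  ("check_response", "repeat_greeting"),
                  ("repeat_greeting", "check_response"),
                  ("play_game", "end")])
print(cyclomatic_complexity(g))   # 5 edges - 5 nodes + 2*1 = 2
```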
{"title":"Natural language interface for programming sensory-enabled scenarios for human-robot interaction","authors":"N. Buchina, P. Sterkenburg, T. Lourens, E. Barakova","doi":"10.1109/RO-MAN46459.2019.8956248","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956248","url":null,"abstract":"Previous research has shown that robot-mediated therapy may be effective in improving different mental or physical conditions, but this effectiveness strongly depends on how well the therapy can be translated to robot training. The goal of this study is to assist the end-users such as occupational and rehabilitation therapists to create without help of technical professional therapy-specific and sensory-enabled scenarios for the robotic assistant for use in an unstructured environment. The Cognitive Dimension of Notations framework was applied to assess the usability of the programming interface and the Cyclomatic complexity method was used to evaluate the complexity of the created robot scenarios. Eleven therapists with a mean age of 39 years working in the care for persons with visual-and-intellectual disabilities participated. The results show good usability of the interface, as measured via the CDN framework and the cyclomatic complexity analysis showed an increased complexity of the created by the occupational and rehabilitation therapist’s scenarios. The participants did not request for very specifically defined behaviors for the robot, and therefore descriptions in natural text can be successfully used for robot programming.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"210 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122621673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ROS-TiPlEx: How to make experts in A.I. Planning and Robotics talk together and be happy
Pub Date: 2019-10-01 | DOI: 10.1109/RO-MAN46459.2019.8956417
Carlo La Viola, Andrea Orlandini, A. Umbrico, A. Cesta
This paper presents a novel comprehensive framework called ROS-TiPlEx (Timeline-based Planning and Execution with ROS) that provides a shared environment in which experts in robotics and planning can easily interact to, respectively, encode information about low-level robot control and define task planning and execution models. ROS-TiPlEx aims to facilitate the interaction between both kinds of experts, thus enhancing and possibly speeding up the process of integrated control design. ROS-TiPlEx is the first tool addressing the connection between ROS and timeline-based planning.
{"title":"ROS-TiPlEx: How to make experts in A.I. Planning and Robotics talk together and be happy","authors":"Carlo La Viola, Andrea Orlandini, A. Umbrico, A. Cesta","doi":"10.1109/RO-MAN46459.2019.8956417","DOIUrl":"https://doi.org/10.1109/RO-MAN46459.2019.8956417","url":null,"abstract":"This paper presents a novel comprehensive framework called ROS-TiPlEx (Timeline-based Planning and Execution with ROS) to provide a shared environment in which experts in robotics and planning can easily interact to, respectively, encode information about low-level robot control and define task planning and execution models. ROS-TiPlEx aims at facilitating the interaction between both kind of experts, thus, enhancing and possibly speeding up the process of an integrated control design. ROS-TiPlEx is the first tool addressing the connection of ROS and timeline-based planning.","PeriodicalId":286478,"journal":{"name":"2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126133484","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}