Pub Date: 2021-08-08 | DOI: 10.1109/RO-MAN50785.2021.9515416
N. DePalma, J. Hodgins
Current models of gesture synthesis focus primarily on a speech signal to synthesize gestures. In this paper, we take a critical look at this approach from the point of view of gesture’s tendency to disambiguate the verbal component of the expression. We identify and contribute an analysis of three challenge factors for these models: 1) synthesizing gesture in the presence of ambiguous utterances seems to be an overwhelmingly useful case for gesture production, yet it is not supported by present-day models of gesture generation; 2) finding the best f-formation to convey spatial gestural information, such as gesturing directions, makes a significant difference for everyday users and must be taken into account; and 3) assuming that captured human motion is a plentiful and easy source for retargeting gestural motion may not take into account the readability of gestures under kinematically constrained feasibility spaces. Recent approaches to generating gesture for agents [1] and robots [2] treat gesture as co-speech that is strictly dependent on verbal utterances. Evidence suggests that gesture selection may leverage task context, so it is not dependent on the verbal utterance alone. This effect is particularly evident when attempting to generate gestures from ambiguous verbal utterances (e.g., "You do this when you get to the fork in the road"). Decoupling this strict dependency may allow gesture to be synthesized for the purpose of clarifying the ambiguous verbal utterance.
Title: Factor exploration of gestural stroke choice in the context of ambiguous instruction utterances: challenges to synthesizing semantic gesture from speech alone
Published in: 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN), pp. 102-109
Pub Date: 2021-08-08 | DOI: 10.1109/RO-MAN50785.2021.9515351
Aparajita Chowdhury, Aino Ahtinen, R. Pieters, Kaisa Väänänen
The integration of collaborative robots (cobots) is changing manufacturing and production processes in factories. When cobots are designed to be efficient, skillful, and safe to interact with, workers can collaborate with them conveniently. As workers often work with cobots intensively, it is crucial to explore the user experience (UX) of cobots. The goal of our research is to explore how factory cobots could be used in ways that support pleasurable worker experiences. We adapted "research through design" (RtD), a method that uses the practices of design to produce new knowledge, to conduct exploratory research on novel interactions related to affectiveness, playfulness, and relatedness in human-robot collaboration (HRC) using the collaborative robot arm Panda. We conducted an exploratory study with 33 participants to evaluate three HRC storyboard scenarios in two complementary remote workshops. The findings report on the suitability of affective and playful cobot behavior in an industrial setting. In addition, we deduced that the personality of the robot plays a crucial role in HRC.
Title: "How are you today, Panda the Robot?" – Affectiveness, Playfulness and Relatedness in Human-Robot Collaboration in the Factory Context
Published in: 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN), pp. 1089-1096
Pub Date: 2021-08-08 | DOI: 10.1109/RO-MAN50785.2021.9515512
Hossein Karami, A. Carfì, F. Mastrogiovanni
In this work, we present a framework for human-robot collaboration that allows the human operator to alter the robot’s plan execution online. To achieve this goal, we introduce Branched AND/OR graphs, an extension of AND/OR graphs, to manage flexible and adaptable human-robot collaboration. In our study, the operator can alter the plan execution using two implementations of Branched AND/OR graphs: learning by demonstration with kinesthetic teaching, and task repetition. Finally, we demonstrate the effectiveness of our framework in a defect-spotting scenario where the operator supervises robot operations and modifies the plan online when necessary.
Title: Branched AND/OR Graphs: Toward Flexible and Adaptable Human-Robot Collaboration
Published in: 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN), pp. 527-533
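The abstract names AND/OR graphs with online branching but gives no implementation details. The following is a minimal sketch of the idea under our own assumptions: an AND node is solved when all children are solved, an OR node when any child is; "branching" wraps an existing node in an OR so that an operator-demonstrated alternative can be attached mid-execution. All class and method names are illustrative, not the authors' code.

```python
# Minimal sketch of an AND/OR task graph with online branching.
# All names here are illustrative assumptions, not the paper's implementation.

class Node:
    """A task-graph node: AND = all children required, OR = any one child."""
    def __init__(self, name, kind="AND", children=None, done=False):
        self.name = name
        self.kind = kind          # "AND" or "OR"
        self.children = children or []
        self.done = done          # leaf completion flag

    def solved(self):
        """A leaf is solved when done; AND needs all children, OR needs one."""
        if not self.children:
            return self.done
        results = [c.solved() for c in self.children]
        return all(results) if self.kind == "AND" else any(results)

    def branch(self, alternative):
        """Online branching: wrap this node in an OR with a new alternative,
        so the operator can switch to a demonstrated variant mid-execution."""
        original = Node(self.name, self.kind, self.children, self.done)
        self.kind = "OR"
        self.children = [original, alternative]
        self.done = False
        return self


# Assemble a small task: "pick" AND "place" must both complete.
pick = Node("pick")
place = Node("place")
task = Node("assemble", "AND", [pick, place])

# The operator demonstrates an alternative placement; branch the graph online.
kinesthetic = Node("place-by-demonstration")
place.branch(kinesthetic)

kinesthetic.done = True   # the demonstrated variant succeeds
pick.done = True
print(task.solved())      # the OR under "place" is satisfied via the branch
```

Under this reading, the original plan stays intact as one OR alternative, which is what makes the structure adaptable rather than destructive.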
Pub Date: 2021-08-08 | DOI: 10.1109/RO-MAN50785.2021.9515456
Bhivraj Suthar, Y. Choi, Seul Jung
This paper presents a new concept of supplementary arms for collaborative tasks with a worker during thin-panel installation on a wall. Foldable robot arms for holding and installation (FRAHI), built on a scissor structure, are proposed as a new concept for human-robot collaboration. The foldable, wearable FRAHI arm is designed to improve cooperativeness and worker safety. The FRAHI arms are integrated with a worker's safety suit, and a soft, flexible waist belt serves as the interface between the arms and the worker, keeping the device comfortable during work. The design of FRAHI is based on folding capability and gravity compensation for cooperative thin-panel installation with workers. An experimental demonstration of thin-panel installation on a wall shows the feasibility of the supplementary arms for worker assistance and cooperation in construction work.
Title: Design and Experimental Evaluation of Foldable Robot Arms for a Holding and Installation Work: FRAHI
Published in: 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN), pp. 899-904
Pub Date: 2021-08-08 | DOI: 10.1109/RO-MAN50785.2021.9515538
Bjarke Kristian Maigaard Kjær Pedersen, Bente Charlotte Weigelin, J. C. Larsen, Jacob Nielsen
With this paper we present the first – to our knowledge – systematic review on how to use Educational Robotics to foster girls’ interest in STEM. This research area is essential to broadening participation across genders in the much-needed STEM workforce, whose size is currently held back by a significant gender disparity. In the review, 13 (quasi-)experimental studies were selected for synthesis from a total of 1093 results found across multiple search queries applied to six scientific databases. When synthesizing the results and findings from the included studies, four major categories of research interest were identified. On this basis, a list of recommendations was established that is readily implementable in most curricula for both compulsory education and extracurricular activities. The recommendations revolve around the choice of technology, applied contextualization, approaches to teaching, and gender compositions. In addition, we discuss the current extent of research on the topic, which shows indications of becoming more active in recent years, and the review’s implications for future research directions.
Title: Using educational robotics to foster girls’ interest in STEM: A systematic review
Published in: 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN), pp. 865-872
Pub Date: 2021-08-08 | DOI: 10.1109/RO-MAN50785.2021.9515486
Vincent Fortineau, M. Makarov, P. Rodríguez-Ayerbe, I. Siegler
The estimation of human endpoint impedance during interaction with a physical environment provides modelling insights both for human movement science and for the design of innovative controllers for collaborative robotics based on physical human-robot interaction. Most human impedance estimation methods described in the literature rely on controlled environments closer to a laboratory than to an industrial setting. In this paper, a force-perturbation method is proposed that places no specific requirements on either the force or the position trajectories and needs no additional sensors on the human. The method is illustrated through an experimental study on a benchmark interaction task. The task was selected to be sufficiently variable to prevent the use of average trajectories, and the perturbations used for the estimation are of sufficiently low amplitude and short duration not to deteriorate human performance in the task. The obtained impedance model parameters agree with those found in the literature, suggesting the validity of the proposed approach. The low constraints on the experimental setting make the method applicable even outside the lab, for example in human-robot collaboration in manufacturing environments, to adjust the robot’s behaviour to changes in the operator’s state (fatigue, stress) or to a variety of human operators with different interaction strategies.
Title: Towards a seamless experimental protocol for human arm impedance estimation in an interactive dynamic task
Published in: 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN), pp. 31-36
Pub Date: 2021-08-08 | DOI: 10.1109/RO-MAN50785.2021.9515365
Connor Esterwood, L. Robert
Trust is vital to promoting human-robot collaboration, but like human teammates, robots make mistakes that undermine trust. As a result, a human’s perception of his or her robot teammate’s trustworthiness can dramatically decrease [1], [2], [3], [4]. Trustworthiness consists of three distinct dimensions: ability (i.e., competency), benevolence (i.e., concern for the trustor), and integrity (i.e., honesty) [5], [6]. Taken together, decreases in trustworthiness decrease trust in the robot [7]. To address this, we conducted a 2 (high vs. low anthropomorphism) x 4 (trust repair strategies) between-subjects experiment. Preliminary results from the first 164 participants (between 19 and 24 per cell) highlight which repair strategies are effective relative to ability, integrity, and benevolence, and to the robot’s anthropomorphism. Overall, this paper contributes to the HRI trust repair literature.
Title: Do You Still Trust Me? Human-Robot Trust Repair Strategies
Published in: 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN), pp. 183-188
Pub Date: 2021-08-08 | DOI: 10.1109/RO-MAN50785.2021.9515526
Yue (Sophie) Guo, Rohit Jena, Dana Hughes, M. Lewis, K. Sycara
To build an agent that assists human rescuers in an urban search and rescue task, it is crucial to understand not only human actions but also the human beliefs that may influence the decision to take these actions. Developing data-driven models to predict a rescuer’s strategies for navigating the environment and triaging victims requires costly data collection and training for each new environment of interest. Transfer learning approaches can mitigate this challenge, allowing a model trained on a source environment/task to generalize to a previously unseen target environment/task with few training examples. In this paper, we investigate transfer learning (a) from a source environment with a smaller number of victim injury classes to one with a larger number, and (b) from a smaller, simpler environment to a larger, more complex one for navigation strategy. Inspired by the hierarchical organization of human spatial cognition, we used graph division to represent spatial knowledge, and a Transfer Learning Diffusion Convolutional Recurrent Neural Network (TL-DCRNN), a spatial and temporal graph-based recurrent neural network suitable for transfer learning, to predict navigation. To abstract the rescue strategy from a rescuer’s field-of-view stream, we used attention-based LSTM networks. We experimented on various transfer learning scenarios and evaluated performance using mean average error. Results indicate that our assistant agent can improve predictive accuracy and learn target tasks faster when equipped with transfer learning methods.
Title: Transfer Learning for Human Navigation and Triage Strategies Prediction in a Simulated Urban Search and Rescue Task
Published in: 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN), pp. 784-791
Pub Date: 2021-08-08 | DOI: 10.1109/RO-MAN50785.2021.9515504
M. Brandão
Visions play an important role in guiding and legitimizing technical research, as well as in shaping the general public’s expectations of technologies. In this paper we analyze technical robotics papers published between 1998 and 2019 to identify themes, trends, and issues in the visions and values promoted by robotics research. In particular, we identify the themes of robotics visions and implicitly normative visions, and we quantify the relative presence of a variety of values and applications within technical papers. We conclude with a discussion of the language of robotics visions, marginalized visions and values, and possible paths forward for the robotics community to better align practice with societal interest. We also discuss implications and future-work suggestions for Responsible Robotics and HRI research.
Title: Normative roboticists: the visions and values of technical robotics papers
Published in: 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN), pp. 671-677
Pub Date: 2021-08-08 | DOI: 10.1109/RO-MAN50785.2021.9515453
Mridul Agarwal, Glebys T. Gonzalez, Mythra V. Balakuntala, Md Masudur Rahman, V. Aggarwal, R. Voyles, Yexiang Xue, J. Wachs
In austere environments, teleoperated surgical robots could save the lives of critically injured patients if they can perform complex surgical maneuvers under limited communication bandwidth. The bandwidth requirement is reduced by transferring atomic surgical actions (referred to as “surgemes”) instead of low-level kinematic information. While such a policy reduces the bandwidth requirement, it demands accurate recognition of the surgemes. In this paper, we demonstrate that transfer learning across surgical tasks can boost the performance of surgeme recognition. We show this by using a network pre-trained on peg-transfer data from the Yumi robot to learn classification of debridement on data from the Taurus robot. Using a pre-trained network achieves a classification accuracy of 76% with only 8 sequences in the target domain, which is 22.5% better than the no-transfer scenario. Additionally, ablations indicate that transfer learning requires 40% less data than no-transfer to achieve the same classification accuracy. Further, the convergence rate of the transfer learning setup is significantly higher than that of the no-transfer setup trained only on the target domain.
Title: Dexterous Skill Transfer between Surgical Procedures for Teleoperated Robotic Surgery
Published in: 2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN), pp. 1236-1242