Pub Date: 2018-10-01. DOI: 10.1109/IROS.2018.8594162
R. Baptista, Benoît Sijobert, C. Azevedo
FES allows individuals with spinal cord injury to propel tricycles by means of their own leg power. In most cases the stimulation patterns are predefined, and muscle activation is triggered on the basis of the pedal position. This requires empirical tuning to fit the pattern to the pilot's sitting position and distance to the crank, with no possibility of generalization and no adaptive properties. This article introduces a new approach to motion segmentation, based on inertial measurement units located on the cyclist's legs, with the final aim of predicting the optimal evolution of the pedaling force. Results obtained with one healthy subject under different cycling conditions are presented, and the application to FES-cycling is discussed.
Title: "New Approach of Cycling Phases Detection to Improve FES-Pedaling in SCI Individuals". 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5181-5186.
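As a concrete illustration of the baseline this paper improves on, the predefined-pattern approach amounts to a fixed map from crank angle to stimulated muscle group. The angle ranges below are hypothetical placeholders, i.e. exactly the per-pilot constants that require empirical tuning:

```python
# Hypothetical sketch of crank-angle-triggered stimulation. The sector
# boundaries are illustrative placeholders, not values from the paper.
QUAD_RANGE = (30.0, 150.0)   # degrees: right-leg quadriceps "push" sector
HAM_RANGE = (210.0, 330.0)   # degrees: right-leg hamstrings "pull" sector

def stimulation_command(crank_angle_deg):
    """Return the muscle group to stimulate for a given crank angle."""
    a = crank_angle_deg % 360.0
    if QUAD_RANGE[0] <= a < QUAD_RANGE[1]:
        return "quadriceps"
    if HAM_RANGE[0] <= a < HAM_RANGE[1]:
        return "hamstrings"
    return "off"
```

Because the sectors are fixed in crank-angle space, any change in sitting position or crank distance shifts the effective muscle geometry without the map adapting, which motivates the IMU-based segmentation proposed here.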
Pub Date: 2018-10-01. DOI: 10.1109/IROS.2018.8594206
James Maley, G. Huang
In this paper, we propose to use unit quaternions to represent point features in visual navigation. Unlike the Cartesian 3D representation, the unit quaternion can represent features at both large and small distances from the camera without suffering from convergence problems. Unlike inverse-depth, homogeneous-point, or anchored-homogeneous-point representations, the unit quaternion has an error state of the minimal dimension, three. In contrast to prior representations, the proposed method does not need to approximate an initially infinite depth uncertainty. In fact, the unit-quaternion error covariance can be initialized from the initial feature observations without prior information, and the initial error states are not only bounded, but the bound is identical for all scene geometries. To the best of our knowledge, this is the first time bearing-only recursive estimation (in covariance form) of point features has been possible without using measurements to initialize the error covariance. The proposed unit-quaternion-based representation is validated on numerical examples.
Title: "Unit Quaternion-Based Parameterization for Point Features in Visual Navigation". 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 6880-6886.
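A minimal sketch of the underlying idea (not necessarily the paper's exact construction): normalizing the homogeneous coordinates (x, y, z, 1) of a Cartesian point yields a unit 4-vector that behaves smoothly as the point recedes toward infinity, which is what makes such representations well conditioned at large depths:

```python
import math

# Hedged sketch: map a Cartesian point to a unit 4-vector via its
# homogeneous coordinates. As |p| grows, the last component tends to 0,
# so distant points approach a finite "point at infinity" on the unit
# 3-sphere instead of diverging.
def point_to_unit4(p):
    x, y, z = p
    n = math.sqrt(x * x + y * y + z * z + 1.0)
    return (x / n, y / n, z / n, 1.0 / n)

def unit4_to_point(q):
    """Inverse map, valid while the last component is nonzero."""
    x, y, z, w = q
    return (x / w, y / w, z / w)
```

The unit-norm constraint removes one degree of freedom, which is why a three-dimensional error state (e.g. in the tangent space of the unit sphere) suffices.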
Pub Date: 2018-10-01. DOI: 10.1109/IROS.2018.8594109
Nathan Bucki, M. Mueller
This paper presents a novel quadcopter design with an added momentum wheel for enhanced stability. The vehicle has improved torque-disturbance rejection capabilities compared to a standard quadcopter. An analysis of the vehicle dynamics shows that the effect of torque disturbances decreases monotonically with increasing angular momentum of the momentum wheel. A framework for choosing the mass moment of inertia and speed of the momentum wheel is given, based on an upper bound on the allowable energy stored in the wheel. Theoretical results are experimentally validated by comparing responses to torque impulses applied to the vehicle with and without the momentum wheel spinning.
Title: "Improved Quadcopter Disturbance Rejection Using Added Angular Momentum". 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4164-4170.
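The monotonic-decrease claim can be checked numerically with Euler's rigid-body equations for a unit-inertia body carrying wheel angular momentum h along its body z-axis: the gyroscopic coupling turns a constant disturbance torque into bounded precession instead of a growing tilt rate. The inertia, torque, and momentum values below are illustrative, not the paper's:

```python
# Numerical sketch (illustrative parameters): peak body rate caused by a
# constant disturbance torque about x, for a unit-inertia body with stored
# wheel momentum h along z. With I = identity, Euler's equations reduce to
#   wx' = tau - wy*h,   wy' = wx*h   (the w x (I w) term vanishes).
def peak_body_rate(wheel_momentum, torque=0.1, t_end=1.0, dt=1e-4):
    wx = wy = 0.0
    h = wheel_momentum
    peak = 0.0
    for _ in range(int(t_end / dt)):
        dwx = torque - wy * h
        dwy = wx * h
        wx += dwx * dt
        wy += dwy * dt
        peak = max(peak, (wx * wx + wy * wy) ** 0.5)
    return peak
```

With h = 0 the rate grows linearly with time; with h > 0 it oscillates with amplitude on the order of 2*torque/h, shrinking as the stored momentum increases.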
Pub Date: 2018-10-01. DOI: 10.1109/IROS.2018.8593968
Joshua Hooks, D. Hong
This paper presents a multi-functional, lightweight robotic system, the Autonomous Legged Personal Helper Robot with Enhanced Dynamics (ALPHRED), capable of both locomotion and manipulation. In addition, we extend a 2D zero moment point (ZMP) trajectory optimization (TO) algorithm to a 3D implementation, and add the acceleration of the center of mass to the TO cost in order to smooth the motion of the robot during trajectories whose support polygons do not intersect. By implementing this versatile TO algorithm on a multi-modal robotic platform, we show that many different forms of stable locomotion and manipulation are possible, including a dynamic 0.7 m/s trot gait.
Title: "Implementation of a Versatile 3D ZMP Trajectory Optimization Algorithm on a Multi-Modal Legged Robotic Platform". 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3777-3782.
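For reference, the quantity being optimized in ZMP trajectory optimization is, under the standard cart-table model, p = c - (z_c / g) * c_ddot. The helper below is textbook material, not ALPHRED-specific code:

```python
# Cart-table ZMP relation: ground-plane ZMP from CoM position, horizontal
# CoM acceleration, and (assumed constant) CoM height. Standard textbook
# formula, not code from the paper.
G = 9.81  # gravitational acceleration, m/s^2

def zmp(com_xy, com_acc_xy, com_height):
    cx, cy = com_xy
    ax, ay = com_acc_xy
    return (cx - com_height / G * ax, cy - com_height / G * ay)
```

Keeping p inside the support polygon is the stability constraint; penalizing CoM acceleration, as the paper does, additionally pulls the ZMP toward the CoM projection and smooths the motion.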
Pub Date: 2018-10-01. DOI: 10.1109/IROS.2018.8593395
Roxana Agrigoroaie, Arturo Cruz-Maya, A. Tapus
This paper investigates how individuals react when an experimenter (human or robot) either tells them to stop in the middle of playing the Jenga game, or accidentally bumps into a table and makes the tower fall down. The mood of the participants and different physiological parameters (i.e., galvanic skin response (GSR) and facial temperature variation) are extracted and analysed based on the condition, the experimenter, and psychological questionnaires (i.e., TEQ, TEIQ, RST-PQ). The study used a between-participants design with 23 participants. Our results show that multiple GSR parameters (e.g., latency, amplitude, number of peaks) differ significantly based on the condition and the experimenter the participants interacted with. The temperature variation in three regions of interest (i.e., the forehead and the left and right periorbital regions) is a good indicator of how ready an individual is to react in an unforeseen situation.
Title: "“Oh! I am so sorry!”: Understanding User Physiological Variation while Spoiling a Game Task". 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 313-319.
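The GSR features named in the abstract (latency, amplitude, number of peaks) can be extracted along the following lines; the simple local-maximum definition of a "peak" and the first-sample baseline are assumptions, not the authors' exact preprocessing:

```python
# Illustrative GSR feature extraction (assumed definitions, not the
# paper's pipeline): latency of the first peak, peak amplitude relative
# to the first sample, and peak count.
def gsr_features(signal, fs):
    """Return (latency_s, amplitude, n_peaks) for a trace sampled at fs Hz."""
    baseline = signal[0]
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i] > signal[i - 1] and signal[i] > signal[i + 1]]
    if not peaks:
        return (None, 0.0, 0)
    latency = peaks[0] / fs          # seconds until the first peak
    amplitude = max(signal[i] for i in peaks) - baseline
    return (latency, amplitude, len(peaks))
```

Real GSR pipelines typically also low-pass filter the trace and separate tonic from phasic components before peak detection.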
Pub Date: 2018-10-01. DOI: 10.1109/IROS.2018.8593591
E. Fosch-Villaronga, H. Felzmann, M. Ramos-Montero, T. Mahler
This paper explores ethical and legal implications arising from the intertwinement of cloud services, healthcare, and robotics. It closes a gap in the literature by highlighting the distinctive ethical and legal concerns associated with the interdependence of the cyber and physical aspects of healthcare cloud robotics. The identified core concerns include uncertainties with regard to data protection requirements; distributed responsibility for unintended harm; achieving transparency and consent for cloud robot services, especially for vulnerable robot users; secondary uses of cloud data derived from robot activities; data security; and wider social issues. The paper aims to raise awareness and stimulate reflection on the legal and ethical impacts that the use of cloud services in healthcare robotics has on different stakeholders. We show that, due to the complexity of these concerns, the design and implementation of such robots in healthcare require an interdisciplinary development and impact assessment process. In light of legal requirements and ethical responsibilities towards end-users and other stakeholders, we draw practical considerations for engineers developing cloud services for robots in healthcare.
Title: "Cloud services for robotic nurses? Assessing legal and ethical issues in the use of cloud services for healthcare robots". 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 290-296.
Pub Date: 2018-10-01. DOI: 10.1109/IROS.2018.8594158
Mar Hernandez, E. Oña, J. Garcia-Haro, Alberto Jardón Huete, C. Balaguer
Robotics can play a significant role in the rehabilitation of patients with spasticity by improving early diagnosis and reducing the costs associated with care. Spasticity is a muscle control disorder characterized by an increase in muscle tone with exaggerated stretch reflexes, as one component of the upper motor neuron syndrome. It is present in several pathologies, such as cerebral palsy, spina bifida, and brain stroke, among others. This video shows ongoing research on developing a platform for the modelling and assessment of spasticity using collaborative robots as a clinical tool. Our aim is to develop methods for non-invasive biomechanical modelling of upper-limb joints using the 7-DOF Rosen kinematics [1], combined with a non-linear form of Hill's force-velocity relation [2] and extended by introducing new parameters such as rigidity, viscoelasticity, extensibility, and thixotropy. After a learning phase performed by the therapist, the robot replicates the trajectories required to perform the assessment. The video also describes the detailed analysis of the passive movement response (force/torque and position/velocity) of the limb. These parameters will be used to determine the degree of spasticity of patients in a fast and objective manner, while simultaneously developing new clinical scales, such as a modified version of the Ashworth scale [3].
Title: "Towards an Automatic Spasticity Assessment by Means of Collaborative Robots". 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1-9.
Pub Date: 2018-10-01. DOI: 10.1109/IROS.2018.8594368
E. Misimi, Alexander Olofsson, A. Eilertsen, Elling Ruud Øye, J. R. Mathiassen
The robotic handling of compliant and deformable raw food materials, characterized by high biological variation, complex 3D geometries, and varying mechanical structure and texture, is currently in high demand in the ocean-space, agricultural, and food industries. Many tasks in these industries are performed manually by human operators who, due to the laborious and tedious nature of their work, exhibit high variability in execution, with variable outcomes. The introduction of robotic automation for the most complex processing tasks has been challenging due to the limitations of current robot learning policies; a more consistent learning policy involving skilled operators is desired. In this paper, we address the problem of robot learning when presented with inconsistent demonstrations. To this end, we propose a robust learning policy based on Learning from Demonstration (LfD) for robotic grasping of compliant food objects. The approach merges RGB-D images and tactile data in order to estimate the gripper pose, gripper finger configuration, and forces exerted on the object that are needed for effective robot handling. During LfD training, the gripper pose, finger configurations, and tactile values for the fingers, as well as RGB-D images, are saved. We present an LfD learning policy that automatically removes inconsistent demonstrations and estimates the teacher's intended policy. The performance of our approach is validated and demonstrated for fragile and compliant food objects with complex 3D shapes. The proposed approach has a vast range of potential applications in the aforementioned industry sectors.
Title: "Robotic Handling of Compliant Food Objects by Robust Learning from Demonstration". 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 6972-6979.
Pub Date: 2018-10-01. DOI: 10.1109/IROS.2018.8593917
Hui Cheng, Zhuoqi Zheng, Jinhao He, Chongyu Chen, Keze Wang, Liang Lin
Dense mapping has always been a goal of simultaneous localization and mapping (SLAM), especially for applications that require fast and dense scene information. Visual-inertial odometry (VIO) is a lightweight and effective solution for fast self-localization. However, VIO-based SLAM systems have difficulty providing dense mapping results due to the spatial sparsity and temporal instability of VIO depth estimates. Although there have been great efforts on real-time mapping and depth recovery from sparse measurements, existing solutions for VIO-based SLAM still fail to preserve sufficient geometric detail in their results. In this paper, we propose to embed depth recovery into VIO-based SLAM for real-time dense mapping. In the proposed method, we present a subspace-based stabilization scheme to maintain temporal consistency and design a hierarchical pipeline for edge-preserving depth interpolation to reduce the computational burden. Numerous experiments demonstrate that our method can achieve an accuracy improvement of up to 49.1 cm compared to state-of-the-art learning-based methods for depth recovery, and can reconstruct sufficient geometric detail in dense mapping when only 0.07% of depth samples are available. Since a simple CPU implementation of our method already runs at 10-20 fps, we believe our method is well suited for practical SLAM systems with strict computational requirements.
Title: "Embedding Temporally Consistent Depth Recovery for Real-time Dense Mapping in Visual-inertial Odometry". 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 693-698.
Pub Date: 2018-10-01. DOI: 10.1109/IROS.2018.8594034
Jack Thomas, R. Vaughan
We propose and test an autonomous robot behavior for socially-compliant navigation of doorways with both human and robot interlocutors. Building on previous work for “aggressive” interaction between robots to resolve navigation deadlocks in corridors, we demonstrate an “assertive” robot that negotiates right-of-way when faced with a human or other robot. The negotiation is implemented using only motion and common navigation sensors, without explicit message-passing. Our goal is for the correct agent to take priority, as decided both by time-efficiency and as judged subjectively by naive human participants. Our contribution is a practical method for doorway negotiation, and a study of human users' responses to a robot that appears to participate in existing social customs surrounding doors. Our method is evaluated with robot-robot experiments and a human-robot interaction study with nonexpert users.
Title: "After You: Doorway Negotiation for Human-Robot and Robot-Robot Interaction". 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3387-3394.