Pub Date: 2017-09-01. DOI: 10.1109/IROS.2017.8202179
Alaa Adel, M. Seif, Gerold Hölzl, M. Kranz, Slim Abdennadher, I. Khalil
In this study, we develop an electromagnetic-based haptic interface that provides controlled magnetic forces to the operator through a wearable haptic device (an orthopedic finger splint with a single dipole moment) without position feedback. First, we model the electromagnetic forces exerted on a single magnetic dipole attached to the wearable haptic device and derive a magnetic force-current mapping for the dipole moment. Second, this mapping is used as the basis for selecting the parameters of the electromagnetic coils of the haptic interface, the dipole moment of the wearable haptic device, and the operating workspace of the system. The electromagnetic-based haptic interface enables three-dimensional (3D) virtual object rendering in mid-air within a workspace of 150 mm × 150 mm × 20 mm, using magnetic forces in excess of 50 mN. In experiments, participants distinguish the geometry of four representative 3D virtual objects with a 61% success rate. However, our statistical analysis shows that the participants' ability to distinguish between geometries is not statistically significant at the 95% confidence level.
{"title":"Rendering 3D virtual objects in mid-air using controlled magnetic fields","authors":"Alaa Adel, M. Seif, Gerold Hölzl, M. Kranz, Slim Abdennadher, I. Khalil","doi":"10.1109/IROS.2017.8202179","DOIUrl":"https://doi.org/10.1109/IROS.2017.8202179","url":null,"abstract":"In this study, we develop an electromagnetic-based haptic interface to provide controlled magnetic forces to the operator through a wearable haptic device (an orthopedic finger splint with single dipole moment) without position feedback. First, we model the electromagnetic forces exerted on a single magnetic dipole attached to the wearable haptic device, and derive magnetic force-current mapping for the dipole moment. Second, this mapping is used as basis for parameter selection of the electromagnetic coils of the haptic interface, dipole moment of the wearable haptic device, and the operating workspace of the system. The electromagnetic-based haptic interface enables three-dimensional (3D) virtual object rendering in mid-air within a workspace of 150 mm × 150 mm × 20 mm, using magnetic forces in excess of 50 mN. Participants experimentally demonstrate a 61% success rate in distinguishing the geometry of 4 representative 3D virtual objects. However, our statistical analysis shows that the ability of the participants to distinguish between geometries is not statistically significant, for 95% confidence level.","PeriodicalId":6658,"journal":{"name":"2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"4 1","pages":"349-356"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80926139","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-09-01. DOI: 10.1109/IROS.2017.8202231
Lenka Pitonakova, R. Crowder, S. Bullock
Designing and representing control algorithms is challenging in swarm robotics, where collective swarm performance depends on interactions between robots and with their environment. Currently available modelling languages, such as UML, cannot fully express these interactions. We therefore propose a new Behaviour-Data Relations Modelling Language (BDRML), in which robot behaviours, the data that robots utilise, and the relationships between them are explicitly represented. This allows BDRML to express control algorithms in which robots cooperate and share information with each other while interacting with the environment.
{"title":"Behaviour-data relations modelling language for multi-robot control algorithms","authors":"Lenka Pitonakova, R. Crowder, S. Bullock","doi":"10.1109/IROS.2017.8202231","DOIUrl":"https://doi.org/10.1109/IROS.2017.8202231","url":null,"abstract":"Designing and representing control algorithms is challenging in swarm robotics, where the collective swarm performance depends on interactions between robots and with their environment. The currently available modeling languages, such as UML, cannot fully express these interactions. We therefore propose a new, Behaviour-Data Relations Modeling Language (BDRML), where robot behaviours and data that robots utilise, as well as relationships between them, are explicitly represented. This allows BDRML to express control algorithms where robots cooperate and share information with each other while interacting with the environment.","PeriodicalId":6658,"journal":{"name":"2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"12 1","pages":"727-732"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80942336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-09-01. DOI: 10.1109/IROS.2017.8206543
Vinay Chawda, G. Niemeyer
In this paper we examine joint torque tracking as well as estimation of external torques for the KUKA Lightweight Robot (LBR) IIWA. To support physical human-robot interaction tasks, we need smooth estimation that allows detection of delicate external events, and good control to hide inertial forces. Unfortunately, a transmission nonlinearity in the motor-to-joint gearing injects vibrations and limits the performance of the built-in torque controller and observer. We confirm the nonlinearity to be a spatially periodic deflection between the motor and joint. Identifying this behavior allows us to generate more accurate joint position measurements. We also design a matching spatial filter to remove the vibrations from joint torque measurements. Experiments on an LBR IIWA show that compensating for the nonlinearity provides smoother external torque estimates and improves torque tracking performance. Furthermore, we are able to increase the gain margin more than threefold over the built-in controller.
{"title":"Toward torque control of a KUKA LBR IIWA for physical human-robot interaction","authors":"Vinay Chawda, G. Niemeyer","doi":"10.1109/IROS.2017.8206543","DOIUrl":"https://doi.org/10.1109/IROS.2017.8206543","url":null,"abstract":"In this paper we examine joint torque tracking as well as estimation of external torques for the KUKA Lightweight Robot (LBR) IIWA. To support physical human-robot interaction tasks, we need smooth estimation that allows detection of delicate external events and good control to hide inertial forces. Unfortunately a transmission nonlinearity in the motor to joint gearing injects vibrations and limits the performance of the built-in torque controller and observer. We confirm the nonlinearity to be a spatially periodic deflection between the motor and joint. Identification of this behavior allows us to generate more accurate joint position measurements. We also design a matching spatial filter to remove the vibrations from joint torque measurements. Experiments on an LBR IIWA show that compensating for the nonlinearity provides smoother external torque estimates and improves the torque tracking performance. Furthermore, we are able to increase the gain margin more than three fold over the built-in controller.","PeriodicalId":6658,"journal":{"name":"2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"4 1","pages":"6387-6392"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81008124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-09-01. DOI: 10.1109/IROS.2017.8206388
G. Chalvatzaki, X. Papageorgiou, C. Tzafestas
When designing a user-friendly Mobility Assistive Device (MAD) for mobility-constrained people, it is important to take into account the diverse spectrum of disabilities, which results in completely different needs that the MAD must cover for each specific user. An intelligent adaptive behavior is therefore necessary. In this work we present experimental results using an in-house methodology for assessing the gait of users with different mobility status while they interact with a robotic MAD. We use data from a laser scanner mounted on the MAD to track the legs using a Particle Filter with Probabilistic Data Association (PDA-PF). The leg states are fed to an HMM-based pathological gait-cycle recognition system that computes, in real time, the gait parameters crucial for characterizing the user's mobility status. We aim to show that a gait assessment system would provide important feedback for an intelligent MAD. We therefore use this system to compare the gaits of subjects under two different control settings of the MAD, and we experimentally validate the ability of our system to recognize the impact of the control designs on the users' walking performance. The results demonstrate that a generic control scheme does not meet every patient's needs; an Adaptive Context-Aware MAD (ACA MAD) that can understand the specific needs of the user is therefore important for enhancing physical human-robot interaction.
{"title":"Towards a user-adaptive context-aware robotic walker with a pathological gait assessment system: First experimental study","authors":"G. Chalvatzaki, X. Papageorgiou, C. Tzafestas","doi":"10.1109/IROS.2017.8206388","DOIUrl":"https://doi.org/10.1109/IROS.2017.8206388","url":null,"abstract":"When designing a user-friendly Mobility Assistive Device (MAD) for mobility constrained people, it is important to take into account the diverse spectrum of disabilities, which results to completely different needs to be covered by the MAD for each specific user. An intelligent adaptive behavior is necessary. In this work we present experimental results, using an in house developed methodology for assessing the gait of users with different mobility status while interacting with a robotic MAD. We use data from a laser scanner, mounted on the MAD to track the legs using Particle Filters and Probabilistic Data Association (PDA-PF). The legs' states are fed to an HMM-based pathological gait cycle recognition system to compute in real-time the gait parameters that are crucial for the mobility status characterization of the user. We aim to show that a gait assessment system would be an important feedback for an intelligent MAD. Thus, we use this system to compare the gaits of the subjects using two different control settings of the MAD and we experimentally validate the ability of our system to recognize the impact of the control designs on the users' walking performance. The results demonstrate that a generic control scheme does not meet every patient's needs, and therefore, an Adaptive Context-Aware MAD (ACA MAD), that can understand the specific needs of the user, is important for enhancing the human-robot physical interaction.","PeriodicalId":6658,"journal":{"name":"2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"42 1","pages":"5037-5042"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81091891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-09-01. DOI: 10.1109/IROS.2017.8206250
George H. Z. Liu, Michael Z. Q. Chen, Yonghua Chen, Lixi Huang
Current research interests for humanoid robots include cost, static stability, and human-likeness. With the help of well-developed locomotion controllers, human-like gait has been realized on humanoid robots with higher energy efficiency and increased motion speed. However, such results concern the walking function of bipedal robots; running has received much less attention. Unlike walking, running can cause more injuries because of how humans cope with the impact from the ground. In this paper, we define a new foot strike pattern that describes the instinctive human running manner, and we collect and analyse the performance of a group of adults to compare different foot strike patterns. This supports lower-limb injury prevention by showing that most of the impact transient is reduced in the new foot strike pattern for both the shod and the barefoot conditions. The newly defined foot strike pattern can therefore be considered an alternative option for reducing running injuries. Furthermore, it would be possible to apply the foot strike pattern mechanism to humanoid robots, not only to extend their service life by reducing collision and vibration damage to components, but also to make robots more human-like.
{"title":"When joggers meet robots: A preliminary study on foot strike patterns","authors":"George H. Z. Liu, Michael Z. Q. Chen, Yonghua Chen, Lixi Huang","doi":"10.1109/IROS.2017.8206250","DOIUrl":"https://doi.org/10.1109/IROS.2017.8206250","url":null,"abstract":"Current research interests for humanoid robots include cost, static stability, and human-likeness. With the help of well-developed locomotion controllers, human-like gait has been realized on humanoid robots with higher energy efficiency and increased motion speed. However, such results are based on the walking function of bipedal robots, running function has been less touched or investigated. Different from walking, running could cause more injuries when human cope with the impact from the ground. In this paper, we define a new foot strike pattern that describes the instinctive human running manner and the performance of a group of adults is collected and analysed for comparison of different foot strike patterns. This helps with lower limbs' injury prevention by proving that most of the impact transient is reduced in the new foot strike pattern for both the shod and the barefoot conditions. Therefore, the newly defined foot strike pattern can be considered as an alternative option to reduce the injury of running activities. Furthermore, it would be possible to apply the foot strike pattern mechanism in humanoid robots, not only to extend the service life from the collision and vibration damage of the components, but also to make robots more human-like.","PeriodicalId":6658,"journal":{"name":"2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"50 1","pages":"3971-3976"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82040651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-09-01. DOI: 10.1109/IROS.2017.8202162
Jurgen Scherer, B. Rinner
The strong resource limitations of unmanned aerial vehicles (UAVs) pose various challenges for UAV applications. In persistent multi-UAV surveillance, several UAVs with limited communication range and flight time have to repeatedly visit sensing locations while maintaining a multi-hop connection to the base station. To achieve persistence, the UAVs need to fly back to the base station in time to recharge. However, under these constraints, simple motion planning algorithms can result in UAVs mutually obstructing each other's movement. We introduce two planning algorithms with different planning horizons and degrees of cooperation, and compare their performance in simulation studies. The simulations show that the short-horizon uncooperative strategy can outperform the other strategies if a sufficient number of UAVs is used. The full-horizon strategy can generate a solution visiting all sensing locations whenever the existence conditions for such a solution are fulfilled.
{"title":"Short and full horizon motion planning for persistent multi-UAV surveillance with energy and communication constraints","authors":"Jurgen Scherer, B. Rinner","doi":"10.1109/IROS.2017.8202162","DOIUrl":"https://doi.org/10.1109/IROS.2017.8202162","url":null,"abstract":"The strong resource limitations of unmanned aerial vehicles (UAVs) pose various challenges for UAV applications. In persistent multi-UAV surveillance, several UAVs with limited communication range and flight time have to repeatedly visit sensing locations while maintaining a multi-hop connection to the base station. In order to achieve persistence, the UAVs need to fly back to the base station in time for recharge. However, simple motion planning algorithms can result in mutual movement obstructions of UAVs caused by the constraints. We introduce two planning algorithms with different planning horizons and cooperation and compare their performance in simulation studies. It can be seen that the short horizon uncooperative strategy can outperform other strategies if a sufficient number of UAVs is used. The full horizon strategy can generate a solution visiting all sensing locations if the existence conditions for such a solution are fulfilled.","PeriodicalId":6658,"journal":{"name":"2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"18 1","pages":"230-235"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78591429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-09-01. DOI: 10.1109/IROS.2017.8206134
Masaru Takeuchi, Yuki Nakamura, A. Ichikawa, Akiyuki Hasegawa, Y. Hasegawa, T. Fukuda
In this research, we fabricated movable toroidal cell structures inside a microfluidic device for tissue engineering applications. A photo-crosslinkable, biodegradable hydrogel, gelatin methacrylate (GelMA), was employed to encapsulate biological cells for assembling cell structures. The UV light power and the GelMA concentration were optimized to both fabricate the microstructures and keep the encapsulated cells alive. Two-layered toroidal cell structures were fabricated that mimic the multi-layered structure of blood vessels. Movability of the microstructures was achieved by a water-repellent coating on the substrate surface. Finally, on-chip fabrication of GelMA microstructures and peeling off of the fabricated microstructures were achieved using a microfluidic chip. The results indicate that the fabricated movable GelMA microstructures can be used for further three-dimensional assembly to achieve vascular-like tube structures.
{"title":"On-chip fabrication of movable toroidal cell structures using photo-crosslinkable biodegradable hydrogel","authors":"Masaru Takeuchi, Yuki Nakamura, A. Ichikawa, Akiyuki Hasegawa, Y. Hasegawa, T. Fukuda","doi":"10.1109/IROS.2017.8206134","DOIUrl":"https://doi.org/10.1109/IROS.2017.8206134","url":null,"abstract":"In this research, we fabricated movable toroidal cell structures inside a microfluidic device for tissue engineering applications. A photo-crosslinkable biodegradable hydrogel gelatin methacrylate (GelMA) was employed to encapsulate biological cells for assembling cell structures. The UV light power and the concentration GelMA hydrogel were optimized to achieve both fabrication of microstructures and live condition of cells. The two-layered toroidal cell structures were fabricated which can mimic the multi-layered structure of blood vessels. The movable microstructures were achieved by the water repellent coating on the substrate surface. Finally, on-chip fabrication of GelMA microstructures and peeling off of the fabricated microstructures were achieved using a microfluidic chip. The results indicate that the fabricated movable GelMA microstructures can be used for further three dimensional assembly to achieve vascular-like tube structures.","PeriodicalId":6658,"journal":{"name":"2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"59 1","pages":"2980-2985"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85833886","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-09-01. DOI: 10.1109/IROS.2017.8206167
Seigo Ito, S. Hiratsuka, M. Ohta, H. Matsubara, Masaru Ogawa
Small 3D LIDARs and multimodal localization are fundamentally important for autonomous robots. This paper presents and demonstrates a sensor and a method for LIDAR-image-based localization. Our small LIDAR, named SPAD LIDAR, uses a single-photon avalanche diode (SPAD) and incorporates the laser receiver and the ambient-light receiver in a single chip. The sensor therefore simultaneously outputs range data and monocular image data, and by virtue of this structure requires no external calibration between the two. Based on this sensor, we introduce a localization method using a deep convolutional neural network (SPAD DCNN) that fuses the SPAD LIDAR outputs: range data, monocular image data, and peak intensity data. Our method regresses the LIDAR's position in an environment. We also introduce an improved variant, designated Fast SPAD DCNN, which integrates the range data and peak intensity data to reduce the computational demands of SPAD DCNN. The integrated data reduce runtime without greatly increasing localization error compared to the original method. We evaluate both SPAD DCNN and Fast SPAD DCNN in indoor environments and compare their performance. Results show that SPAD DCNN and Fast SPAD DCNN improve localization in terms of accuracy and runtime.
{"title":"SPAD DCNN: Localization with small imaging LIDAR and DCNN","authors":"Seigo Ito, S. Hiratsuka, M. Ohta, H. Matsubara, Masaru Ogawa","doi":"10.1109/IROS.2017.8206167","DOIUrl":"https://doi.org/10.1109/IROS.2017.8206167","url":null,"abstract":"Small 3D LIDAR and a multimodal-based localization are fundamentally important for autonomous robots. This paper describes presentation and demonstration of a sensor and a method for LIDAR-image based localization. Our small LIDAR, named SPAD LIDAR, uses a single-photon avalanche diode (SPAD). The SPAD LIDAR incorporates laser receiver and environmental light receiver in a single chip. Therefore, the sensor simultaneously outputs range data and monocular image data. By virtue of this structure, the sensor requires no external calibration between range data and monocular image data. Based on this sensor, we introduce a localization method using a deep convolutional neural network (SPAD DCNN), which fuses SPAD LIDAR outputs: range data, monocular image data, and peak intensity data. Our method regresses LIDAR's position in an environment. We also introduce improved SPAD DCNN, designated as Fast SPAD DCNN. To reduce the computational demands of SPAD DCNN, Fast SPAD DCNN integrates range data and peak intensity data. The integrated data reduces runtime without greatly increasing localization error compared to the conventional method. We evaluate our SPAD DCNN and Fast SPAD DCNN localization method in indoor environments and compare its performance. Results show that SPAD DCNN and Fast SPAD DCNN improve localization in terms of accuracy and runtime.","PeriodicalId":6658,"journal":{"name":"2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"67 1","pages":"3312-3317"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85845278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-09-01. DOI: 10.1109/IROS.2017.8206577
Md. Mahbubur Rahman, Leonardo Bobadilla, Franklin Abodo, B. Rapp
In this paper, we solve the problem of relay robot placement in multi-robot missions to establish or enhance communication between a static operator and a number of remote units in an environment with known obstacles. We study the hardness of two different relay placement problems: 1) a chain formation of multiple relay robots to transmit information from an operator to a single unit; and 2) a spanning tree of relays connecting multiple remote units to the operator. We first build a communication-map data structure from a layered graph that contains the positions of the relays as the unit moves. This structure is computed once and reused throughout the mission, significantly reducing plan re-computation time compared to the best-known solution in the literature. Second, we create a minimum arborescence that connects the operator, relays, and units at optimal communication cost. Finally, we validate our ideas through software simulations, hardware experiments, and a comparison of our approach to state-of-the-art methods.
{"title":"Relay vehicle formations for optimizing communication quality in robot networks","authors":"Md. Mahbubur Rahman, Leonardo Bobadilla, Franklin Abodo, B. Rapp","doi":"10.1109/IROS.2017.8206577","DOIUrl":"https://doi.org/10.1109/IROS.2017.8206577","url":null,"abstract":"In this paper, we solve the problem of relay robot placement in multi-robot missions to establish or enhance communication between a static operator and a number of remote units in an environment with known obstacles. We study the hardness of two different relay placement problems: 1) a chain formation of multiple relay robots to transmit information from an operator to a single unit; and 2) a spanning tree of relays connecting multiple remote units to the operator. We first build a communication map data structure from a layered graph that contains the positions of the relays as the unit moves. This structure is computed once and reused throughout the mission, significantly reducing plan re-computation time when compared to the best-known solution in the literature. Second, we create a min-arborescence tree that forms a connected component among the operator, relays, and units, and that has an optimal communication cost. Finally, we validate our ideas through software simulations, hardware experiments, and a comparison of our approach to state-of-the-art methods.","PeriodicalId":6658,"journal":{"name":"2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"31 1","pages":"6633-6639"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86053235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-09-01. DOI: 10.1109/IROS.2017.8206154
Etienne Roberge, Vincent Duchaine
Today, collaborative robots are often taught new tasks through "teaching by demonstration" techniques rather than manual programming. This works well for many tasks; however, some tasks, like precise tight-fitting insertions, can be hard to recreate through exact position replays because they also involve forces and are highly affected by the robot's repeatability and the position of the object in the hand. As yet, there is no way to automatically detect when procedures to reduce position uncertainty should be used. In this paper, we present a new way to automatically detect insertion tasks during impedance-control-based trajectory teaching. This is accomplished by recording the forces and torques applied by the operator and feeding these signals to a convolutional neural network, which extracts the important features of the spatio-temporal force and torque signals for distinguishing insertion tasks. Eventually, this method could help robots understand the tasks they are taught at a higher level: they would not only be capable of a position-time replay of the task, but would also recognize the best strategy to apply in order to accomplish it (in this case, insertion). Our method was tested on data from 886 experiments conducted on eight different in-hand objects. Results show that we can distinguish insertion tasks from pick-and-place tasks with an average accuracy of 82%.
{"title":"Detecting insertion tasks using convolutional neural networks during robot teaching-by-demonstration","authors":"Etienne Roberge, Vincent Duchaine","doi":"10.1109/IROS.2017.8206154","DOIUrl":"https://doi.org/10.1109/IROS.2017.8206154","url":null,"abstract":"Today, collaborative robots are often taught new tasks through “teaching by demonstration” techniques rather than manual programming. This works well for many tasks; however, some tasks like precise tight-fitting insertions can be hard to recreate through exact position replays because they also involve forces and are highly affected by the robot's repeatability and the position of the object in the hand. As of yet there is no way to automatically detect when procedures to reduce position uncertainty should be used. In this paper, we present a new way to automatically detect insertion tasks during impedance control-based trajectory teaching. This is accomplished by recording the forces and torques applied by the operator and inputting these signals to a convolutional neural network. The convolutional neural network is used to extract important features of the spatio-temporal forces and torque signals for distinguishing insertion tasks. Eventually, this method could help robots understand the tasks they are taught at a higher level. They will not only be capable of a position-time replay of the task, but will also recognize the best strategy to apply in order to accomplish the task (in this case insertion). Our method was tested on data obtained from 886 experiments that were conducted on eight different in-hand objects. Results show that we can distinguish insertion tasks from pick-and-place tasks with an average accuracy of 82%.","PeriodicalId":6658,"journal":{"name":"2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"11 1","pages":"3210-3216"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84108304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}