Pub Date: 2017-12-13 | DOI: 10.1109/IROS.2017.8206104
Urja Acharya, Alisha Bevins, Brittany A. Duncan
This paper presents an investigation of human comfort with a small Unmanned Aerial Vehicle (sUAV) through a study comparing comfort with an sUAV versus a ground vehicle. Prior research on human comfort with sUAVs is limited to a single study, which did not include free flight, and while ground-vehicle distancing has been studied, it has never been directly compared to an sUAV. The novelty of the approach is the use of a motion-capture room to achieve smooth trajectories and precise measurements while conducting the first free-flight study to compare human comfort after interaction with aerial versus ground vehicles (within subjects, N = 16). These results contribute to the understanding of social, collaborative, and assistive robots, with implications for general human-robot interaction as it evolves to include aerial vehicles. Based on the reduced stress and distance (36.5 cm, or 1.2 ft) for ground vehicles and the increased stress and distance (65.5 cm, or 2.15 ft) for sUAVs, it is recommended that further studies examine how design features affect comfort in interactions with sUAVs and how these interactions differ from those with ground robots.
Title: Investigation of human-robot comfort with a small Unmanned Aerial Vehicle compared to a ground robot. 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2758-2765.
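The comfort distances above can be recovered from motion-capture logs as a minimum separation between the participant and the vehicle. A minimal sketch, using the reported distances as illustrative sample values (the study's actual extraction pipeline is not described in the abstract):

```python
import math

def min_separation(person_xy, vehicle_xy):
    """Minimum Euclidean separation between paired position samples (cm)."""
    return min(math.dist(p, v) for p, v in zip(person_xy, vehicle_xy))

# Toy approach trajectories: the vehicle closes in on a stationary person.
person = [(0.0, 0.0), (0.0, 0.0), (0.0, 0.0)]
ugv    = [(100.0, 0.0), (60.0, 0.0), (36.5, 0.0)]
suav   = [(120.0, 0.0), (80.0, 0.0), (65.5, 0.0)]
print(min_separation(person, ugv))   # 36.5
print(min_separation(person, suav))  # 65.5
```

The larger minimum for the sUAV mirrors the paper's finding that people keep aerial vehicles farther away than ground vehicles.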
Pub Date: 2017-12-13 | DOI: 10.1109/IROS.2017.8206096
Kui-Ting Chen, Mitsuhiro Kamezaki, Takahiro Katano, Taisei Kaneko, Kohga Azuma, T. Ishida, M. Seki, Ken Ichiryu, S. Sugano
The disaster-response crawler robot OCTOPUS has four arms and four flippers for better adaptability to disaster environments. To further improve its mobility and adaptability on unstructured terrain, we propose a new locomotion control method called the compound motion pattern (CMP) for multi-limbed robots such as OCTOPUS. This hybrid locomotion, in which the arms and flippers cooperate, is expected to adapt well to unstructured terrain because it combines the advantages of crawling and walking. As a preliminary study, we propose a fundamental and conceptual CMP, clarify the problems in constructing it, and develop a semi-autonomous control system for realizing it. The electrically driven OCTOPUS was used to verify the reliability and correctness of the CMP. Experiments on climbing a step indicate that the proposed control system obtained relatively accurate terrain information and that the CMP enabled the robot to climb the step. We thus confirmed that the proposed CMP is effective in increasing the terrain adaptability of robots in unstructured environments and could be a useful locomotion method for advanced disaster-response robots.
Title: A semi-autonomous compound motion pattern using multi-flipper and multi-arm for unstructured terrain traversal. 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2704-2709.
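A compound motion pattern of this kind can be pictured as a cyclic phase sequencer that alternates arm and flipper roles. A minimal sketch with hypothetical phase names (the paper's actual CMP coordinates four arms and four flippers from sensed terrain data):

```python
from itertools import cycle

# Hypothetical phase labels for one step-climbing cycle.
PHASES = ["arms_reach", "arms_support", "flippers_drive", "body_shift"]

def cmp_sequence(n_cycles):
    """Return the cyclic phase order for n_cycles climbing cycles."""
    phase = cycle(PHASES)
    return [next(phase) for _ in range(n_cycles * len(PHASES))]

print(cmp_sequence(1))  # one full cycle in order
```

A semi-autonomous controller would advance to the next phase only after terrain sensing confirms the current one has completed.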
Pub Date: 2017-12-13 | DOI: 10.1109/IROS.2017.8206598
Francesco Rovida, Bjarne Großmann, V. Krüger
The flexibility required in modern industry demands robots that can be quickly and efficiently adapted to different tasks. One way to achieve such a flexible programming paradigm is to instruct robots with task goals and let planning algorithms deduce the correct sequence of actions for the specific context. A common approach is to connect the skills that realize a semantically defined operation in the planning domain — such as picking or placing an object — to specific executable functions. As a result, the skills are treated as independent components, which leads to suboptimal execution. In this paper we present an approach in which the execution procedures and the planning domain are specified at the same time using solely extended Behavior Trees (eBT), a model formalized and discussed in this paper. At run time, the robot can use the more abstract skills to plan a sequence with a PDDL planner, expand the sequence into a hierarchical tree, and reorganize it to optimize execution time and resource use. The optimization is demonstrated on a kitting operation in both simulation and a lab environment, showing savings of up to 20% in final execution time.
Title: Extended behavior trees for quick definition of flexible robotic tasks. 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 6793-6800.
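The plan-to-tree expansion can be sketched with a minimal behavior-tree core: a planner's action sequence becomes a Sequence node over skill leaves. This is a generic illustration, not the eBT model itself (which adds further node types and optimization passes):

```python
SUCCESS = "SUCCESS"

class Action:
    """Leaf node wrapping one executable skill."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def tick(self, bb):
        return self.fn(bb)

class Sequence:
    """Ticks children in order; stops at the first non-SUCCESS status."""
    def __init__(self, *children):
        self.children = children
    def tick(self, bb):
        for child in self.children:
            status = child.tick(bb)
            if status != SUCCESS:
                return status
        return SUCCESS

def expand(plan, skills):
    """Expand a planner's action sequence into an executable Sequence tree."""
    return Sequence(*(skills[name] for name in plan))

# Hypothetical skill library for a kitting task.
def make_skill(name):
    def fn(bb):
        bb.setdefault("log", []).append(name)  # record execution order
        return SUCCESS
    return Action(name, fn)

skills = {n: make_skill(n) for n in ("pick", "place")}
tree = expand(["pick", "place", "pick"], skills)  # e.g. output of a PDDL planner
bb = {}
print(tree.tick(bb), bb["log"])  # SUCCESS ['pick', 'place', 'pick']
```

Reorganizing such a tree (e.g. merging repeated setup actions) is where the paper's reported 20% time saving comes from.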
Pub Date: 2017-12-13 | DOI: 10.1109/IROS.2017.8206344
Athanasios S. Polydoros, Evangelos Boukas, L. Nalpantidis
Inverse dynamics models are applied to a plethora of robot control tasks, such as computed-torque control, which are essential for trajectory execution. The analytical derivation of such dynamics models for robotic manipulators can be challenging and depends on their physical characteristics. This paper proposes a machine-learning approach for modeling inverse dynamics and describes its implementation on a physical robotic system. The proposed algorithm can perform online multi-target learning, thus allowing efficient implementations on real robots. Our approach has been tested both offline, on datasets captured from three different robotic systems, and online, on a physical system. The proposed algorithm exhibits state-of-the-art performance in terms of generalization ability and convergence. Furthermore, it has been implemented within ROS for controlling a Baxter robot. Evaluation results show that its performance is comparable to the robot's built-in inverse dynamics model.
Title: Online multi-target learning of inverse dynamics models for computed-torque control of compliant manipulators. 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4716-4722.
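The idea of online multi-target learning can be shown with a deliberately simplified stand-in: a linear multi-output model updated by one SGD step per streamed sample, mapping a joint-state feature vector to several torque targets at once (the paper's actual learner is more sophisticated):

```python
import numpy as np

class OnlineMultiTarget:
    """Online multi-output linear regressor, one SGD step per sample.
    A simplified stand-in for a learned inverse dynamics model that maps
    joint states to per-joint torques."""
    def __init__(self, n_features, n_targets, lr=0.05):
        self.W = np.zeros((n_targets, n_features))
        self.lr = lr

    def predict(self, x):
        return self.W @ x

    def update(self, x, tau):
        err = self.predict(x) - tau           # per-target prediction error
        self.W -= self.lr * np.outer(err, x)  # shared-input SGD step
        return float(np.abs(err).mean())

rng = np.random.default_rng(0)
A = np.array([[1.0, -0.5], [0.3, 2.0], [0.0, 1.0]])  # "true" dynamics map
model = OnlineMultiTarget(n_features=2, n_targets=3)
errs = [model.update(x, A @ x) for x in rng.normal(size=(500, 2))]
print(errs[0] > errs[-1])  # True: error shrinks as samples stream in
```

All targets share one feature pass per sample, which is what makes multi-target updates cheap enough for online use on a real robot.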
Pub Date: 2017-12-13 | DOI: 10.1109/IROS.2017.8206517
Dimitris Zermas, V. Morellas, D. Mulla, N. Papanikolopoulos
Financial and social elements of modern societies are closely connected to the cultivation of corn. Because corn is produced at massive scale, deficiencies during cultivation translate directly into major financial losses. The early detection and treatment of crop deficiencies is thus a task of great significance. Toward automated health-condition assessment, this study introduces a scheme for computing plant health indices. Based on the 3D reconstruction of small batches of corn plants, an alternative to existing, cumbersome Leaf Area Index (LAI) estimation methodologies is presented. The use of 3D models provides higher information content than planar methods, mainly because less information is lost to leaf occlusions. High-resolution images of corn stalks are collected and used to build 3D models of the plants of interest. Based on the extracted 3D point clouds, an accurate calculation of the LAI of the plants is performed. An experimental validation emulating real-world scenarios, using artificial corn plants as ground truth for the LAI estimation, supports the efficacy of the proposed methodology. The conclusions of this work suggest a fully automated scheme for information gathering on modern farms, capable of replacing current labor-intensive procedures and thus greatly improving the timely detection of crop deficiencies.
Title: Estimating the Leaf Area Index of crops through the evaluation of 3D models. 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 6155-6162.
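The LAI definition underlying the paper is simple: total one-sided leaf area divided by the horizontal ground area it covers. A minimal sketch, assuming the reconstructed leaf surface has already been triangulated from the point cloud:

```python
import numpy as np

def triangle_area(a, b, c):
    """Area of a 3D triangle given its three vertices."""
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a))

def leaf_area_index(leaf_triangles, ground_area_m2):
    """LAI = total one-sided leaf area / horizontal ground area, with the
    leaf surface given as a triangulated mesh (e.g. from a point cloud)."""
    total = sum(triangle_area(*(np.asarray(v, float) for v in tri))
                for tri in leaf_triangles)
    return total / ground_area_m2

# A flat 1 m x 2 m "leaf" (two triangles) over 1 m^2 of ground -> LAI = 2.
leaf = [((0, 0, 0), (1, 0, 0), (1, 2, 0)),
        ((0, 0, 0), (1, 2, 0), (0, 2, 0))]
print(leaf_area_index(leaf, 1.0))  # 2.0
```

Working from the 3D mesh rather than a planar image is what lets occluded leaf portions still contribute their true area.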
Pub Date: 2017-12-13 | DOI: 10.1109/IROS.2017.8206108
Yusuke Takimoto, Komei Hasegawa, Taichi Sono, M. Imai
It is important for a robot to respond appropriately to its surrounding environment and events, rather than follow fixed movements specified in advance, in order to communicate smoothly with humans. In this paper, a simple bi-layered architecture (SB architecture) is proposed that integrates behaviors in two stages: prioritization and weighted averaging. The SB architecture integrates voluntary movements by prioritization and involuntary and reflex movements by weighted averaging, thereby generating robot behaviors that respond immediately to events occurring around the robot. Its advantages are that it can easily generate varied behaviors by combining multiple behaviors and that it enables a simple design for robot behavior. Furthermore, since behaviors are triggered by sensors, the robot's behaviors are reactive to the surrounding environment and events. In particular, if a robot performs the behaviors necessary for communication with humans, such as shifting its gaze and gesturing, it can exhibit the liveness that promotes communication with humans. In addition, since the SB architecture has a simple structure that facilitates robot design, its parameters can be set automatically by optimizing them over pairs of sensor information and ideal behaviors. This parameter optimization leads to behaviors characterized by liveness through the appropriate combination of behaviors.
Title: A simple bi-layered architecture to enhance the liveness of a robot. 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2786-2792.
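The two integration stages can be sketched for a single command channel (say, head yaw). The final blend ratio between the two stages is a hypothetical parameter here; the abstract does not specify how the stages are combined numerically:

```python
def integrate(voluntary, involuntary, blend=0.5):
    """Two-stage integration sketch for one command channel.
    Stage 1 (voluntary): the highest-priority command wins outright.
    Stage 2 (involuntary/reflex): commands are weighted-averaged.
    `blend` mixing the two stages is an assumed, tunable parameter."""
    base = max(voluntary, key=lambda pc: pc[0])[1]
    wsum = sum(w for w, _ in involuntary)
    reflex = sum(w * c for w, c in involuntary) / wsum
    return (1 - blend) * base + blend * reflex

# A task gaze command (priority 2) beats an idle one (priority 1); reflex
# movements then pull the result toward their weighted average.
print(integrate([(1, 10.0), (2, 20.0)], [(1.0, 0.0), (3.0, 4.0)]))  # 11.5
```

Because the reflex stage is an average rather than a switch, small involuntary movements perturb rather than replace the voluntary behavior, which is what produces the lifelike quality the paper calls liveness.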
Pub Date: 2017-12-13 | DOI: 10.1109/IROS.2017.8206565
Xiao Sun, K. Hashimoto, Tomotaka Teramachi, T. Matsuzawa, S. Kimura, Nobuaki Sakai, S. Hayashi, Y. Yoshida, A. Takanishi
This paper describes an approach that enables the four-limbed robot WAREC-1 to climb up and down vertical ladders stably. First, WAREC-1 is introduced, and dynamic stability conditions for ladder climbing under a multi-mass model are proposed as the basis for judging whether a four-limbed robot is stable while climbing a vertical ladder. According to the proposed stability conditions, three types of moment directly affect the robot's stability on a ladder: the gravitational moment, the inertial moment, and the reaction-force moment. From the analysis of these three moments and the relationships among them, stability control methods are proposed to maintain the robot's stability on the ladder as far as possible and to avoid their mutual interference. In combination with the proposed stability conditions and stability control, a motion planning method previously proposed by the authors, which allows independent path and time planning within trajectory planning, is applied to the stable planning of climbing up and down a vertical ladder and reinforces the efficiency of the stability control. Finally, results from simulation and the physical robot verify the validity of the proposed control methods.
Title: Planning and control of stable ladder climbing motion for the four-limbed robot "WAREC-1". 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 6547-6554.
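A planar sketch of the multi-mass moment bookkeeping: the gravitational moment about the supporting rung is summed per mass, and stability is checked against a resisting limit. Both the planar reduction and the limit value are illustrative assumptions, not the paper's full 3D conditions:

```python
G = 9.81  # gravitational acceleration, m/s^2

def gravitational_moment(masses_kg, offsets_m):
    """Gravitational moment about the supporting rung for a multi-mass
    model: sum_i m_i * g * x_i, where x_i is the horizontal offset of
    mass i from the rung axis (planar sketch)."""
    return sum(m * G * x for m, x in zip(masses_kg, offsets_m))

def is_stable(m_grav, m_inertial, m_reaction, resisting_limit):
    """Hypothetical check: the net of the three moments named in the
    paper must stay within what the gripping limbs can resist."""
    return abs(m_grav + m_inertial + m_reaction) <= resisting_limit

m_g = gravitational_moment([30.0, 10.0], [0.25, 0.10])  # body + limb masses
print(round(m_g, 3), is_stable(m_g, 5.0, -12.0, 80.0))
```

Controlling limb accelerations so the inertial and reaction-force moments partially cancel the gravitational one is the intuition behind the paper's stability control.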
Pub Date: 2017-12-13 | DOI: 10.1109/IROS.2017.8202195
Yu Kuwajima, Hiroki Shigemune, Vito Cacucciolo, M. Cianchetti, C. Laschi, S. Maeda
Designing and manufacturing actuators from soft materials are among the most important subjects for future robotics. In nature, animals made of soft tissue, such as the octopus, have attracted the attention of the robotics community in recent years. Suckers (or suction cups) are among the most important and peculiar organs of the octopus body, giving it the ability to apply high forces on the external environment. Integrating suction cups into soft robots can enhance their ability to manipulate objects and interact with the environment in a similar way. However, artificial suction cups are currently actuated by fluid pressure, so most of them require external compressors, which greatly increases the size of the soft robot. In this work, we propose using the ElectroHydroDynamics (EHD) principle to actuate a suction cup. EHD is a fluidic phenomenon coupled with an electrochemical reaction that can induce pressure through the application of a high-intensity electric field. We succeeded in developing an EHD-driven suction cup while keeping the whole structure extremely simple, fabricating it with a 3D printer and a cutting plotter. We can control the adhesion of the suction cup by controlling the direction of the fluidic flow in our EHD pump. Thanks to a symmetrical arrangement of the electrodes, composed of plates parallel to the channel direction, we can reverse the flow by changing the sign of the applied voltage. We obtained a pressure of 643 Pa with one EHD pump unit and 1428 Pa with five units under an applied voltage of 6 kV. The suction cup actuator was able to hold and release a 2.86 g piece of paper. With this soft actuator driven by an EHD pump, we expand the possibility of miniaturizing soft robots.
Title: Active suction cup actuated by ElectroHydroDynamics phenomenon. 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 470-475.
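A back-of-the-envelope check of the reported numbers: with an ideal seal, holding force equals pressure difference times sealed area. The cup radius below is an assumption (the abstract gives only the pressures), so the payload figure is illustrative:

```python
import math

G = 9.81  # m/s^2

def max_payload_g(delta_p_pa, cup_area_m2):
    """Mass (grams) an ideal suction cup can hold when adhesion force
    equals pressure difference x sealed area (perfect seal assumed)."""
    return delta_p_pa * cup_area_m2 / G * 1000.0

area = math.pi * 0.01 ** 2             # assumed 1 cm-radius cup
payload = max_payload_g(1428.0, area)  # five-unit EHD pump pressure
print(round(payload, 1))               # ~45.7 g, ample for 2.86 g of paper
```

Even the single-unit pressure of 643 Pa would comfortably exceed the paper's 2.86 g demonstration load under these assumptions, which is consistent with the cup reliably holding and releasing it.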
Pub Date: 2017-12-13 | DOI: 10.1109/IROS.2017.8205958
Naman Patel, A. Choromańska, P. Krishnamurthy, F. Khorrami
We present a novel end-to-end learning framework that enables ground vehicles to autonomously navigate unknown environments by fusing raw camera pixels with depth measurements from a LiDAR. A deep neural network architecture is introduced to effectively perform modality fusion and reliably predict steering commands even in the presence of sensor failures. The proposed network is trained on our own dataset, collected from a LiDAR and a camera mounted on a UGV in an indoor corridor environment. A comprehensive experimental evaluation demonstrates the robustness of the architecture and shows that the network can autonomously navigate the corridor environment. Furthermore, we demonstrate that fusing the camera and LiDAR modalities provides benefits beyond robustness to sensor failures: the fused multimodal system shows a potential to navigate around static and dynamic obstacles and to handle changes in environment geometry without being trained for these tasks.
Title: Sensor modality fusion with CNNs for UGV autonomous driving in indoor environments. 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1531-1536.
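The fusion pattern can be sketched without deep-learning machinery: extract features per modality, concatenate, and regress a bounded steering command. The random weights and tiny dense "branches" below stand in for the paper's trained CNN branches; zeroing one input emulates a sensor failure:

```python
import numpy as np

rng = np.random.default_rng(42)

def features(x, W):
    """Stand-in per-modality feature extractor (the paper uses CNN branches)."""
    return np.maximum(0.0, W @ x)  # ReLU features

def steering(cam, lidar, W_cam, W_lidar, w_head):
    """Mid-level fusion sketch: per-modality features are concatenated and
    a linear head regresses a steering command squashed into [-1, 1]."""
    fused = np.concatenate([features(cam, W_cam), features(lidar, W_lidar)])
    return float(np.tanh(w_head @ fused))

# Toy inputs standing in for image pixels and LiDAR depth returns.
cam, lidar = rng.normal(size=16), rng.normal(size=8)
W_cam, W_lidar = rng.normal(size=(4, 16)), rng.normal(size=(4, 8))
w_head = rng.normal(size=8)
print(steering(cam, lidar, W_cam, W_lidar, w_head))        # nominal
print(steering(cam, np.zeros(8), W_cam, W_lidar, w_head))  # LiDAR failure
```

Because each modality contributes its own feature slice, a failed sensor merely zeroes part of the fused vector instead of corrupting the whole input, which is the structural reason such networks can be trained to tolerate dropouts.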
Pub Date : 2017-12-13 DOI: 10.1109/IROS.2017.8206617
Ayanori Yorozu, Ayumi Tanigawa, Masaki Takahashi
In this paper, a dual-task performance assessment robot (DAR) that uses projection is developed. Falling is a common problem in the growing elderly population, and fall-risk assessment systems have proven helpful in community-based fall prevention programs. One risk factor for falling is the deterioration of a person's dual-task performance. An example of gait training that enhances both motor and cognitive functions is the multi-target stepping task (MTST), in which participants step on assigned colored targets. To evaluate dual-task performance during the MTST in human living spaces, projection mapping and robot navigation that maintains a safe distance from the participant are key technologies. Projection mapping is used to evaluate long-distance dual-task performance: MTST images are displayed on the floor by the moving DAR. To evaluate the accuracy of the projected target positions, experiments on MTST projection using the moving DAR and video analysis are carried out. Additionally, to verify the validity of the MTST performed by the DAR moving at a constant speed, experiments with several young participants are carried out.
{"title":"Dual-task performance assessment robot","authors":"Ayanori Yorozu, Ayumi Tanigawa, Masaki Takahashi","doi":"10.1109/IROS.2017.8206617","DOIUrl":"https://doi.org/10.1109/IROS.2017.8206617","url":null,"abstract":"In this paper, dual-task performance assessment robot (DAR) using projection is developed. Falling is a common problem in the growing elderly population. Fall-risk assessment systems have proven to be helpful in community-based fall prevention programs. One of the risk factors of falling is the deterioration of a person's dual-task performance. For example, gait training, which enhances both motor and cognitive functions, is a multi-target stepping task (MTST), in which participants step on assigned colored targets. To evaluate the dual-task performance during MTST in human living space, projection mapping and robot navigation to maintain a safe distance from the participant are key technologies. Projection mapping is used to evaluate the long-distance dual-task performance, where MTST images are displayed on the floor by the moving DAR. To evaluate the accuracy of the projected target position, experiments for MTST projection using the moving DAR and video analysis are carried out. 
Additionally, to verify the validity of the MTST by the moving DAR at a constant speed, experiments with several young participants are carried out.","PeriodicalId":6658,"journal":{"name":"2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"6 1","pages":"6929-6934"},"PeriodicalIF":0.0,"publicationDate":"2017-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80887540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
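The "maintain a safe distance from the participant" requirement in the abstract is, at its simplest, a distance-keeping control problem. The paper does not give its controller; the following is a hypothetical one-dimensional sketch in which a robot leading a walking person combines a feed-forward term on the person's speed with a proportional correction of the gap error (the gain `kp` and the 1.5 m target gap are invented for illustration).

```python
def follow_ahead_speed(person_pos, robot_pos, person_speed, target_gap=1.5, kp=0.8):
    """Speed command for a robot leading a person along a corridor (1-D)."""
    gap = robot_pos - person_pos
    # feed-forward on the person's speed + proportional correction of the gap error
    return person_speed + kp * (target_gap - gap)

# simulate: person walks at a constant 1 m/s, robot starts 1.0 m ahead
robot, person = 1.0, 0.0
dt, v_person = 0.1, 1.0
for _ in range(100):
    robot += follow_ahead_speed(person, robot, v_person) * dt
    person += v_person * dt
print(round(robot - person, 3))  # → 1.5 (gap converges to the target)
```

With this structure, steady state is reached exactly when the robot matches the person's speed, which forces the gap to the target value — a design choice that keeps the projected MTST targets at a constant distance in front of the walker.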