Establishing Appropriate Trust via Critical States
Pub Date: 2018-10-01 | DOI: 10.1109/IROS.2018.8593649 | pp. 3929-3936
Sandy H. Huang, K. Bhatia, P. Abbeel, A. Dragan
In order to effectively interact with or supervise a robot, humans need to have an accurate mental model of its capabilities and how it acts. Learned neural network policies make that particularly challenging. We propose an approach for helping end-users build a mental model of such policies. Our key observation is that for most tasks, the essence of the policy is captured in a few critical states: states in which it is very important to take a certain action. Our user studies show that if the robot shows a human what its understanding of the task's critical states is, then the human can make a more informed decision about whether to deploy the policy, and if she does deploy it, when she needs to take control from it at execution time.
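As a rough illustration of how such critical states might be surfaced, the sketch below flags states where the learned policy's best action stands out sharply from the alternatives. The Q-value-gap criterion and the threshold are assumptions for illustration, not the paper's exact definition.

```python
import numpy as np

def critical_states(states, q_values, threshold=1.0):
    """Flag states where taking the right action matters most: the best
    action's value exceeds the mean action value by more than `threshold`,
    so a wrong action is costly (hypothetical criterion)."""
    return [s for s, q in zip(states, q_values)
            if np.max(q) - np.mean(q) > threshold]

# Toy example: three discrete states, four actions each.
states = ["s0", "s1", "s2"]
q_values = [np.array([0.10, 0.20, 0.15, 0.12]),  # flat -> not critical
            np.array([5.00, 0.10, 0.20, 0.30]),  # sharp -> critical
            np.array([1.00, 0.90, 1.10, 1.00])]  # flat -> not critical
print(critical_states(states, q_values))         # ['s1']
```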
{"title":"Establishing Appropriate Trust via Critical States","authors":"Sandy H. Huang, K. Bhatia, P. Abbeel, A. Dragan","doi":"10.1109/IROS.2018.8593649","DOIUrl":"https://doi.org/10.1109/IROS.2018.8593649","url":null,"abstract":"In order to effectively interact with or supervise a robot, humans need to have an accurate mental model of its capabilities and how it acts. Learned neural network policies make that particularly challenging. We propose an approach for helping end-users build a mental model of such policies. Our key observation is that for most tasks, the essence of the policy is captured in a few critical states: states in which it is very important to take a certain action. Our user studies show that if the robot shows a human what its understanding of the task's critical states is, then the human can make a more informed decision about whether to deploy the policy, and if she does deploy it, when she needs to take control from it at execution time.","PeriodicalId":6640,"journal":{"name":"2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"49 1","pages":"3929-3936"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76419416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive step rotation in biped walking
Pub Date: 2018-10-01 | DOI: 10.1109/IROS.2018.8594431 | pp. 720-725
N. Bohorquez, Pierre-Brice Wieber
We want to enable a biped robot to reorient its feet so that it faces its direction of motion. Model Predictive Control schemes for biped walking usually assume a fixed feet rotation, since adapting it online turns the problem nonlinear. Nonlinear solvers do not guarantee satisfaction of the nonlinear constraints at every iterate, which is problematic for the real-time operation of robots. We propose to define safe linear constraints whose feasible set always lies inside the intersection of the nonlinear constraints. We simulate the robot walking in a crowd and compare the performance of the proposed method against the original nonlinear problem solved as a Sequential Quadratic Program.
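A minimal sketch of the generic idea behind such safe linear constraints, assuming a disc-shaped nonlinear constraint: an inscribed polygon of half-planes is conservative, so any iterate satisfying the linear constraints automatically satisfies the nonlinear one. This is the general construction only, not the paper's specific constraint set.

```python
import numpy as np

def inner_linear_constraints(r, m=8):
    """Half-planes n_i . x <= d whose intersection is a regular m-gon
    inscribed in the disc ||x|| <= r: a conservative (always-inside)
    linear approximation of the nonlinear constraint."""
    angles = 2 * np.pi * np.arange(m) / m
    normals = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    d = r * np.cos(np.pi / m)  # shrink so the polygon's vertices stay inside
    return normals, d

# Any x feasible for the linear constraints is feasible for ||x|| <= r.
normals, d = inner_linear_constraints(r=0.1)
x = np.array([0.05, 0.02])
assert np.all(normals @ x <= d) and np.linalg.norm(x) <= 0.1
```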
{"title":"Adaptive step rotation in biped walking","authors":"N. Bohorquez, Pierre-Brice Wieber","doi":"10.1109/IROS.2018.8594431","DOIUrl":"https://doi.org/10.1109/IROS.2018.8594431","url":null,"abstract":"We want to enable the robot to reorient its feet in order to face its direction of motion. Model Predictive Control schemes for biped walking usually assume fixed feet rotation since adapting them online leads to a nonlinear problem. Nonlinear solvers do not guarantee the satisfaction of nonlinear constraints at every iterate and this can be problematic for the real-time operation of robots. We propose to define safe linear constraints that are always inside the intersection of the nonlinear constraints. We make simulations of the robot walking on a crowd and compare the performance of the proposed method with respect to the original nonlinear problem solved as a Sequential Quadratic Program.","PeriodicalId":6640,"journal":{"name":"2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"89 1","pages":"720-725"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75964369","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evolving a Sensory-Motor Interconnection for Dynamic Quadruped Robot Locomotion Behavior
Pub Date: 2018-10-01 | DOI: 10.1109/IROS.2018.8593671 | pp. 7089-7095
Azhar Aulia Saputra, W. Chin, János Botzheim, N. Kubota
In this paper, we present a novel biologically inspired evolving neural oscillator for quadruped robot locomotion that minimizes constraints during the locomotion process. The proposed sensory-motor coordination model is formed by the interconnection between motor and sensory neurons. The model uses Bacterial Programming to restructure the number of joints, the number of neurons per joint, and their interconnections based on environmental conditions. Bacterial Programming is inspired by the evolutionary process of bacteria, which includes bacterial mutation and a gene transfer process. In this system, the number of joints, the number of neurons, and the interconnection structure change dynamically depending on information from the sensors mounted on the robot. The model is first optimized in simulation, and the optimized structure is then deployed on a real quadruped robot for locomotion. The optimization is based on tree-structure optimization, which simplifies the sensory-motor interconnection structure. The proposed model was validated by a series of real-robot experiments in different environmental conditions.
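The sketch below shows the skeleton of a Bacterial Evolutionary Algorithm loop, the optimizer family that Bacterial Programming builds on: bacterial mutation performs per-individual local search, and gene transfer copies genes from good individuals into poor ones. Operator names and parameters here are illustrative assumptions, not the paper's implementation.

```python
import random

def bacterial_evolve(population, fitness, mutate, n_clones=4,
                     n_transfer=2, n_gen=50):
    """Skeleton of a Bacterial Evolutionary Algorithm: bacterial mutation
    does per-individual local search (best clone survives); gene transfer
    copies a random gene from the better half into the worse half."""
    for _ in range(n_gen):
        # Bacterial mutation: clone, perturb, keep the best variant.
        for i, ind in enumerate(population):
            clones = [mutate(list(ind)) for _ in range(n_clones)] + [ind]
            population[i] = max(clones, key=fitness)
        # Gene transfer: good individuals "infect" bad ones.
        population.sort(key=fitness, reverse=True)
        half = len(population) // 2
        for _ in range(n_transfer):
            src = random.choice(population[:half])
            dst = random.choice(population[half:])
            j = random.randrange(len(dst))
            dst[j] = src[j]
    return max(population, key=fitness)

# Toy usage: evolve an 8-gene vector to maximize the sum of its genes.
def mutate(ind):
    ind[random.randrange(len(ind))] += random.gauss(0.0, 0.3)
    return ind

pop = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(10)]
print(bacterial_evolve(pop, fitness=sum, mutate=mutate))
```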
{"title":"Evolving a Sensory-Motor Interconnection for Dynamic Quadruped Robot Locomotion Behavior","authors":"Azhar Aulia Saputra, W. Chin, János Botzheim, N. Kubota","doi":"10.1109/IROS.2018.8593671","DOIUrl":"https://doi.org/10.1109/IROS.2018.8593671","url":null,"abstract":"In this paper, we present a novel biologically inspired evolving neural oscillator for quadruped robot locomotion to minimize constraints during the locomotion process. The proposed sensory-motor coordination model is formed by the interconnection between motor and sensory neurons. The model utilizes Bacterial Programming to reconstruct the number of joints and neurons in each joint based on environmental conditions. Bacterial Programming is inspired by the evolutionary process of bacteria that includes bacterial mutation and gene transfer process. In this system, either the number of joints, the number of neurons, or the interconnection structure are changing dynamically depending on the sensory information from sensors equipped on the robot. The proposed model is simulated in computer for realizing the optimization process and the optimized structure is then applied to a real quadruped robot for locomotion process. The optimizing process is based on tree structure optimization to simplify the sensory-motor interconnection structure. The proposed model was validated by series of real robot experiments in different environmental conditions.","PeriodicalId":6640,"journal":{"name":"2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"4 1","pages":"7089-7095"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87502087","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Electing an Approximate Center in a Huge Modular Robot with the k-BFS SumSweep Algorithm
Pub Date: 2018-10-01 | DOI: 10.1109/IROS.2018.8593612 | pp. 4825-4832
André Naz, Benoît Piranda, J. Bourgeois, S. Goldstein
Among the diversity of existing modular robotic systems, we consider in this paper the subset of distributed modular robotic ensembles composed of resource-constrained, identical modules that are organized in a lattice structure and can only communicate with neighboring modules. These ensembles form asynchronous distributed embedded systems. Many algorithms for distributed system coordination require a specific role to be played by a leader, i.e., a single node in the system. This leader can be elected using various criteria. A possible strategy is to elect a center node, i.e., a node that minimizes the distance to all other nodes. Such a node is ideally located to communicate with all the others, which leads to better performance in many algorithms. The contribution of this paper is the k-BFS SumSweep algorithm, designed to elect an approximate-center node. We evaluated our algorithm both on hardware modular robots and in a simulator for large ensembles of robots. Experimental results show that k-BFS SumSweep is often the most accurate approximation algorithm (with an average relative accuracy between 90% and 100%) while using the fewest messages in large-scale systems, requiring only a modest amount of memory per node, and converging in a reasonable length of time.
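A compact sketch of the k-BFS SumSweep idea, assuming a connected unweighted graph given as an adjacency dict: run k breadth-first searches, accumulate per-node distance sums, and elect the node with the smallest accumulated sum. The source-selection heuristic below (next source = node with the largest partial sum) is a simplification of the paper's, for illustration only.

```python
from collections import deque

def bfs_distances(adj, source):
    """Hop distances from `source` in an unweighted, connected graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def kbfs_sumsweep(adj, k=3):
    """Run k BFS passes, accumulating each node's summed distance to the
    sources; the node with the smallest accumulated sum is elected as
    the approximate center."""
    nodes = list(adj)
    sums = {v: 0 for v in nodes}
    source = nodes[0]
    for _ in range(k):
        dist = bfs_distances(adj, source)
        for v in nodes:
            sums[v] += dist[v]
        source = max(nodes, key=lambda v: sums[v])  # next BFS source
    return min(nodes, key=lambda v: sums[v])

# Small tree: node 1 is the true center.
adj = {0: [1], 1: [0, 2, 3], 2: [1], 3: [1, 4], 4: [3]}
print(kbfs_sumsweep(adj))  # 1
```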
{"title":"Electing an Approximate Center in a Huge Modular Robot with the k-BFS SumSweep Algorithm","authors":"André Naz, Benoît Piranda, J. Bourgeois, S. Goldstein","doi":"10.1109/IROS.2018.8593612","DOIUrl":"https://doi.org/10.1109/IROS.2018.8593612","url":null,"abstract":"Among the diversity of the existing modular robotic systems, we consider in this paper the subset of distributed modular robotic ensembles composed of resource-constrained identical modules that are organized in a lattice structure and which can only communicate with neighboring modules. These modular robotic ensembles form asynchronous distributed embedded systems. In many algorithms dedicated to distributed system coordination, a specific role has to be played by a leader, i.e., a single node in the system. This leader can be elected using various criteria. A possible strategy is to elect a center node, i.e., a node that has the minimum distance to all the other nodes. Indeed, this node is ideally located to communicate with all the others and this leads to better performance in many algorithms. The contribution of this paper is to propose the $k$-BFS SumSweep algorithm designed to elect an approximate-center node. We evaluated our algorithm both on hardware modular robots and in a simulator for large ensembles of robots. Experimental results show that k-BFS SumSweep is often the most accurate approximation algorithm (with an average relative accuracy between 90% to 100%) while using the fewest messages in large-scale systems, requiring only a modest amount of memory per node, and converging in a reasonable length of time.","PeriodicalId":6640,"journal":{"name":"2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"47 1","pages":"4825-4832"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87799380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Iterative Learning of Energy-Efficient Dynamic Walking Gaits
Pub Date: 2018-10-01 | DOI: 10.1109/IROS.2018.8593548 | pp. 3815-3820
Felix H. Kong, I. Manchester
Dynamic walking robots have the potential for efficient and lifelike locomotion, but computing efficient gaits and tracking them is difficult in the presence of under-modeling. Iterative Learning Control (ILC) learns the control signal needed to track a periodic reference over several attempts, augmenting a model with online data. Terminal ILC (TILC), a variant of ILC, allows other performance objectives to be addressed at the cost of ignoring parts of the reference. However, dynamic walking gaits are not necessarily periodic in time. In this paper, we adapt TILC to jointly optimize final foot placement and energy efficiency on dynamic walking robots by indexing by a phase variable instead of time, yielding “phase-indexed TILC” (θ-TILC). When implemented on a five-link walker in simulation, θ-TILC learns a more energy-efficient walking motion than traditional time-indexed TILC.
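To make the terminal-ILC update concrete, here is a minimal sketch on a toy phase-indexed system: only the terminal output (the "foot placement") is penalized, and a fixed-gain update nudges the whole input sequence between strides. The scalar gain, toy dynamics, and function names are assumptions, not the paper's formulation.

```python
import numpy as np

def terminal_ilc(rollout, u, target, gain=0.5, iters=20):
    """Terminal ILC sketch: only the terminal output is penalized, and a
    fixed scalar gain spreads the terminal error over the whole input
    sequence between attempts (illustrative update law)."""
    for _ in range(iters):
        u = u + gain * (target - rollout(u))
    return u

def rollout(u, dt=0.05):
    """Toy stride dynamics: a double integrator driven by one control
    value per phase bin. Indexing inputs by a monotonic phase variable
    theta instead of time is what lets the update survive strides of
    varying duration."""
    x = v = 0.0
    for ui in u:
        v += ui * dt
        x += v * dt
    return x  # terminal "foot placement"

u = terminal_ilc(rollout, np.zeros(20), target=0.3)
print(abs(rollout(u) - 0.3) < 1e-3)  # True: terminal error converges
```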
{"title":"Iterative Learning of Energy-Efficient Dynamic Walking Gaits","authors":"Felix H. Kong, I. Manchester","doi":"10.1109/IROS.2018.8593548","DOIUrl":"https://doi.org/10.1109/IROS.2018.8593548","url":null,"abstract":"Dynamic walking robots have the potential for efficient and lifelike locomotion, but computing efficient gaits and tracking them is difficult in the presence of under-modeling. Iterative Learning Control (ILC) is a method to learn the control signal to track a periodic reference over several attempts, augmenting a model with online data. Terminal ILC (TILC), a variant of ILC, allows other performance objectives to be addressed at the cost of ignoring parts of the reference. However, dynamic walking robot gaits are not necessarily periodic in time. In this paper, we adapt TILC to jointly optimize final foot placement and energy efficiency on dynamic walking robots by indexing by a phase variable instead of time, yielding “phase-indexed TILC” (θ - TILC). When implemented on a five-link walker in simulation, θ- TILC learns a more energy-efficient walking motion compared to traditional time-indexed TILC.","PeriodicalId":6640,"journal":{"name":"2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"10 1","pages":"3815-3820"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87823847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-Time Segmentation with Appearance, Motion and Geometry
Pub Date: 2018-10-01 | DOI: 10.1109/IROS.2018.8594088 | pp. 5793-5800
Mennatullah Siam, Sara Elkerdawy, M. Gamal, Moemen Abdel-Razek, Martin Jägersand, Hong Zhang
Real-time segmentation is of crucial importance to robotics-related applications such as autonomous driving, driver-assistance systems, and traffic monitoring from unmanned aerial vehicle (UAV) imagery. We propose a novel two-stream convolutional network for motion segmentation that exploits flow and geometric cues to balance the trade-off between accuracy and computational efficiency. The geometric cues take advantage of the domain knowledge of the application: for the mostly planar scenes seen from high-altitude UAVs, homography-compensated flow is used, while for urban scenes in autonomous driving, where GPS/IMU sensory data is available, sparse projected depth estimates and odometry information are used. The network provides a 4.7× speedup over state-of-the-art motion segmentation networks, from 153 ms to 36 ms, at the expense of reduced segmentation accuracy at pixel boundaries. This enables the network to run in real time on a Jetson TX2. To recuperate some of the accuracy loss, geometric priors are used while still achieving much improved computational efficiency with respect to the state of the art. The geometric priors improve segmentation IoU in UAV imagery by 5.2% over the baseline network, while on KITTI-MoSeg the sparse depth estimates improve segmentation by 12.5% over the baseline. Our motion segmentation solution is verified on the popular KITTI and VIVID datasets, with additional labels we have produced. The code for our work is publicly available at https://github.com/MSiam/RTMotSeg_Geom.
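For the UAV case, the homography-compensated flow cue can be sketched with standard OpenCV calls: estimate the dominant camera-induced homography from tracked corners, warp the previous frame onto the current one, and treat the residual dense-flow magnitude as the motion cue. This shows the generic idea only, not the paper's network or exact parameters.

```python
import cv2
import numpy as np

def motion_cue(prev_gray, curr_gray):
    """Homography-compensated flow for mostly planar scenes: estimate
    the dominant (camera-induced) homography from tracked corners, warp
    the previous frame onto the current one, then take the residual
    dense-flow magnitude as the independent-motion cue."""
    pts = cv2.goodFeaturesToTrack(prev_gray, 500, 0.01, 8)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    ok = status.ravel() == 1
    H, _ = cv2.findHomography(pts[ok], nxt[ok], cv2.RANSAC, 3.0)
    h, w = prev_gray.shape
    stabilized = cv2.warpPerspective(prev_gray, H, (w, h))
    flow = cv2.calcOpticalFlowFarneback(stabilized, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return np.linalg.norm(flow, axis=2)  # per-pixel residual motion
```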
{"title":"Real-Time Segmentation with Appearance, Motion and Geometry","authors":"Mennatullah Siam, Sara Elkerdawy, M. Gamal, Moemen Abdel-Razek, Martin Jägersand, Hong Zhang","doi":"10.1109/IROS.2018.8594088","DOIUrl":"https://doi.org/10.1109/IROS.2018.8594088","url":null,"abstract":"Real-time Segmentation is of crucial importance to robotics related applications such as autonomous driving, driving assisted systems, and traffic monitoring from unmanned aerial vehicles imagery. We propose a novel two-stream convolutional network for motion segmentation, which exploits flow and geometric cues to balance the accuracy and computational efficiency trade-offs. The geometric cues take advantage of the domain knowledge of the application. In case of mostly planar scenes from high altitude unmanned aerial vehicles (UAVs), homography compensated flow is used. While in the case of urban scenes in autonomous driving, with GPS/IMU sensory data available, sparse projected depth estimates and odometry information are used. The network provides 4.7⨯ speedup over the state of the art networks in motion segmentation from 153ms to 36ms, at the expense of a reduction in the segmentation accuracy in terms of pixel boundaries. This enables the network to perform real-time on a Jetson T⨯2. In order to recuperate some of the accuracy loss, geometric priors is used while still achieving a much improved computational efficiency with respect to the state-of-the-art. The usage of geometric priors improved the segmentation in UAV imagery by 5.2 % using the metric of IoU over the baseline network. While on KITTI-MoSeg the sparse depth estimates improved the segmentation by 12.5 % over the baseline. Our proposed motion segmentation solution is verified on the popular KITTI and VIVID datasets, with additional labels we have produced. The code for our work is publicly available at11https://github.com/MSiam/RTMotSeg_Geom","PeriodicalId":6640,"journal":{"name":"2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"2021 1","pages":"5793-5800"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86822247","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Scan Context: Egocentric Spatial Descriptor for Place Recognition Within 3D Point Cloud Map
Pub Date: 2018-10-01 | DOI: 10.1109/IROS.2018.8593953 | pp. 4802-4809
Giseop Kim, Ayoung Kim
Compared to the diverse feature detectors and descriptors used for visual scenes, describing a place using structural information is relatively underexplored. Recent advances in simultaneous localization and mapping (SLAM) provide dense 3D maps of the environment, and localization has been achieved with diverse sensors. Toward global localization based on structural information, we propose Scan Context, a non-histogram-based global descriptor computed from 3D Light Detection and Ranging (LiDAR) scans. Unlike previously reported methods, the proposed approach directly records the 3D structure of the space visible from the sensor and relies on neither a histogram nor prior training. In addition, we propose a similarity score for computing the distance between two scan contexts, and a two-phase search algorithm for detecting loops efficiently. Scan Context and its search algorithm make loop detection invariant to LiDAR viewpoint changes, so that loops can be detected in challenging situations such as reverse revisits and corners. Scan Context has been evaluated on various benchmark datasets of 3D LiDAR scans, and the proposed method shows substantially improved performance.
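A minimal sketch of the descriptor and a rotation-invariant distance in the spirit the abstract describes: bin point heights into a ring × sector polar grid, then take a minimum over circular sector shifts. Bin counts, the maximum range, and the assumption of non-negative heights are illustrative choices.

```python
import numpy as np

def scan_context(points, n_ring=20, n_sector=60, r_max=80.0):
    """Ring x sector descriptor: each polar bin stores the maximum point
    height in that cell (assumes heights >= 0; empty bins stay 0)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.hypot(x, y)
    keep = r < r_max
    ring = (r[keep] / r_max * n_ring).astype(int)
    sector = (((np.arctan2(y[keep], x[keep]) + np.pi) / (2 * np.pi))
              * n_sector).astype(int) % n_sector
    desc = np.zeros((n_ring, n_sector))
    np.maximum.at(desc, (ring, sector), z[keep])
    return desc

def sc_distance(d1, d2):
    """Rotation-invariant distance: minimum mean column-wise cosine
    distance over all circular shifts of the sector axis."""
    best = np.inf
    for s in range(d2.shape[1]):
        d2s = np.roll(d2, s, axis=1)
        num = (d1 * d2s).sum(axis=0)
        den = np.linalg.norm(d1, axis=0) * np.linalg.norm(d2s, axis=0)
        cos = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
        best = min(best, 1.0 - cos.mean())
    return best
```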
{"title":"Scan Context: Egocentric Spatial Descriptor for Place Recognition Within 3D Point Cloud Map","authors":"Giseop Kim, Ayoung Kim","doi":"10.1109/IROS.2018.8593953","DOIUrl":"https://doi.org/10.1109/IROS.2018.8593953","url":null,"abstract":"Compared to diverse feature detectors and descriptors used for visual scenes, describing a place using structural information is relatively less reported. Recent advances in simultaneous localization and mapping (SLAM) provides dense 3D maps of the environment and the localization is proposed by diverse sensors. Toward the global localization based on the structural information, we propose Scan Context, a non-histogram-based global descriptor from 3D Light Detection and Ranging (LiDAR) scans. Unlike previously reported methods, the proposed approach directly records a 3D structure of a visible space from a sensor and does not rely on a histogram or on prior training. In addition, this approach proposes the use of a similarity score to calculate the distance between two scan contexts and also a two-phase search algorithm to efficiently detect a loop. Scan context and its search algorithm make loop-detection invariant to LiDAR viewpoint changes so that loops can be detected in places such as reverse revisit and corner. Scan context performance has been evaluated via various benchmark datasets of 3D LiDAR scans, and the proposed method shows a sufficiently improved performance.","PeriodicalId":6640,"journal":{"name":"2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"404 1","pages":"4802-4809"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86833804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
FarSight: Long-Range Depth Estimation from Outdoor Images
Pub Date: 2018-10-01 | DOI: 10.1109/IROS.2018.8593971 | pp. 4751-4757
Md. Alimoor Reza, J. Kosecka, P. David
This paper introduces the problem of long-range monocular depth estimation for outdoor urban environments. Range sensors and traditional depth estimation algorithms (both stereo and single view) predict depth for distances of less than 100 meters in outdoor settings and 10 meters in indoor settings. The shortcomings of outdoor single view methods that use learning approaches are, to some extent, due to the lack of long-range ground truth training data, which in turn is due to limitations of range sensors. To circumvent this, we first propose a novel strategy for generating synthetic long-range ground truth depth data. We utilize Google Earth images to reconstruct large-scale 3D models of different cities with proper scale. The acquired repository of 3D models and associated RGB views along with their long-range depth renderings are used as training data for depth prediction. We then train two deep neural network models for long-range depth estimation: i) a Convolutional Neural Network (CNN) and ii) a Generative Adversarial Network (GAN). We found in our experiments that the GAN model predicts depth more accurately. We plan to open-source the database and the baseline models for public use.
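One practical detail when regressing depths that span tens of meters to kilometers is loss balance across the range; an L1 loss in log-depth space is a common choice for this, sketched below as an assumption rather than the paper's actual training objective.

```python
import torch
import torch.nn.functional as F

def log_depth_loss(pred, target, eps=1e-3):
    """L1 loss in log-depth space, so a 10 m error at 50 m is weighted
    like a 200 m error at 1 km; `eps` guards against log(0)."""
    return F.l1_loss(torch.log(pred + eps), torch.log(target + eps))

# Sanity check: errors of equal relative size give (nearly) equal loss.
a = log_depth_loss(torch.tensor([60.0]), torch.tensor([50.0]))
b = log_depth_loss(torch.tensor([1200.0]), torch.tensor([1000.0]))
print(torch.allclose(a, b, atol=1e-4))  # True (up to eps)
```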
{"title":"FarSight: Long-Range Depth Estimation from Outdoor Images","authors":"Md. Alimoor Reza, J. Kosecka, P. David","doi":"10.1109/IROS.2018.8593971","DOIUrl":"https://doi.org/10.1109/IROS.2018.8593971","url":null,"abstract":"This paper introduces the problem of long-range monocular depth estimation for outdoor urban environments. Range sensors and traditional depth estimation algorithms (both stereo and single view) predict depth for distances of less than 100 meters in outdoor settings and 10 meters in indoor settings. The shortcomings of outdoor single view methods that use learning approaches are, to some extent, due to the lack of long-range ground truth training data, which in turn is due to limitations of range sensors. To circumvent this, we first propose a novel strategy for generating synthetic long-range ground truth depth data. We utilize Google Earth images to reconstruct large-scale 3D models of different cities with proper scale. The acquired repository of 3D models and associated RGB views along with their long-range depth renderings are used as training data for depth prediction. We then train two deep neural network models for long-range depth estimation: i) a Convolutional Neural Network (CNN) and ii) a Generative Adversarial Network (GAN). We found in our experiments that the GAN model predicts depth more accurately. We plan to open-source the database and the baseline models for public use.","PeriodicalId":6640,"journal":{"name":"2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"39 1","pages":"4751-4757"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87089678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Passive acoustic tracking for behavior mode classification between surface and underwater vehicles
Pub Date: 2018-10-01 | DOI: 10.1109/IROS.2018.8593981 | pp. 2383-2388
E. Fischell, Oscar Viquez, H. Schmidt
Autonomous underwater vehicles (AUVs) pose significant communication challenges: vehicles are submerged for long periods during which speed-of-light communication is impossible. This is a particular problem on low-cost AUV platforms, which lack the acoustic modems needed to report vehicle state or receive re-deploy commands. We investigate one possible method of providing operators with a communication line to these vehicles: using underwater noise both to classify the behavior of submerged vehicles and to command them. In this scheme, processing of data from hydrophone arrays provides operators with AUV mode estimates, and provides AUVs with surface-vehicle behavior updates. Simulation studies were used to characterize trajectories for simple transect versus loiter behaviors based on bearing and time to intercept (TTI). A classifier based on k-nearest-neighbor with dynamic time warping as the distance metric was used to classify the simulation data. The simulation-based classifier was then applied to field array data: bearing tracks from passive tracking of a loitering AUV, and bearing and TTI tracks from passive tracking of a transecting boat. Experiment data was classified with 76% accuracy using bearing-only data, 96% accuracy for TTI-only data, and 99% accuracy for combined classification. The techniques developed here could be used for AUV cueing by surface vessels and for monitoring AUV behavior.
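The classifier itself is standard and easy to sketch: dynamic time warping compares bearing or TTI tracks of unequal length, and a k-nearest-neighbor vote assigns the behavior label. The toy data below is illustrative, not from the paper.

```python
import numpy as np

def dtw(a, b):
    """Dynamic time warping distance between two 1-D sequences, e.g.
    bearing or TTI tracks of unequal length."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def knn_dtw(query, train_seqs, train_labels, k=3):
    """k-nearest-neighbor vote with DTW as the distance metric."""
    order = sorted(range(len(train_seqs)),
                   key=lambda i: dtw(query, train_seqs[i]))
    votes = [train_labels[i] for i in order[:k]]
    return max(set(votes), key=votes.count)

# Toy usage: rising bearing tracks -> "transect", oscillating -> "loiter".
train = [[0, 1, 2, 3], [0, 2, 4, 6], [0, 1, 0, 1], [1, 0, 1, 0]]
labels = ["transect", "transect", "loiter", "loiter"]
print(knn_dtw([0, 1, 2, 4], train, labels, k=3))  # "transect"
```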
{"title":"Passive acoustic tracking for behavior mode classification between surface and underwater vehicles","authors":"E. Fischell, Oscar Viquez, H. Schmidt","doi":"10.1109/IROS.2018.8593981","DOIUrl":"https://doi.org/10.1109/IROS.2018.8593981","url":null,"abstract":"Autonomous underwater vehicles (AUVs) pose significant communication challenges: vehicles are submerged for periods of time in which speed-of-light communication is impossible. This is a particular problem on low-cost AUV platforms, on which acoustic modems are not available to get vehicle state or provide re-deploy commands. We investigate one possible method of providing operators with a communication line to these vehicles by using noise underwater to both classify behavior of submerged vehicles and to command them. In this scheme, processing of data from hydrophone arrays provide operators with AUV mode estimates and AUVs with surface vehicle behavior updates. Simulation studies were used to characterize trajectories for simple transect versus loiter behaviors based on the bearing and time to intercept (TTI). A classifier based on K-nearest-neighbor with dynamic time warping as a distance metric was used to classify simulation data. The simulation-based classifier was then applied to classify bearing tracking data from passive tracking of a loitering AUV and bearing and TTI data from passive tracking of a transecting boat based on field array data. Experiment data was classified with 76 % accuracy using bearing-only data, 96% accuracy for TTI -only data and 99 % accuracy for combined classification. The techniques developed here could be used for AUV cuing by surface vessels and monitoring of AUV behavior.","PeriodicalId":6640,"journal":{"name":"2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"16 1","pages":"2383-2388"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88024764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Achieving Flexible Assembly Using Autonomous Robotic Systems
Pub Date: 2018-10-01 | DOI: 10.1109/IROS.2018.8593852 | pp. 1-9
Kieran Gilday, Josie Hughes, F. Iida
Prefabrication of structures is currently used only in a limited capacity because of its lack of flexibility, despite its potential cost and speed advantages. Autonomous flexible reassembly enables structures that can be continuously and iteratively disassembled and reassembled, providing far more flexibility than single-shot prefabrication methods. Because assembly and disassembly processes are asymmetric, disassembly should be considered at assembly time to ensure structures can be recycled and reassembled. This allows for agile development, significantly reducing time and resource usage during the build process. In this work, we develop a framework for flexible reassembly and a robotic platform to implement and test it with simple Lego bricks. The trade-offs of this new assembly method, in terms of time, resource use, and probability of success, can be understood by using a cost function to compare it with alternative fabrication methods.
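A toy version of such a cost function, with illustrative weights and an expected-retry term that are assumptions rather than the paper's calibrated model:

```python
def assembly_cost(t_build, resources, p_success,
                  w_time=1.0, w_res=1.0, w_fail=10.0):
    """Weighted build time and resource use, scaled by the expected
    number of attempts, plus a penalty per expected failed attempt."""
    attempts = 1.0 / max(p_success, 1e-6)  # expected tries until success
    return attempts * (w_time * t_build + w_res * resources) \
        + w_fail * (attempts - 1.0)

# Fast but failure-prone single-shot build vs. slower, reliable re-assembly.
print(assembly_cost(t_build=10, resources=5, p_success=0.70))  # ~25.7
print(assembly_cost(t_build=14, resources=4, p_success=0.95))  # ~19.5
```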
{"title":"Achieving Flexible Assembly Using Autonomous Robotic Systems","authors":"Kieran Gilday, Josie Hughes, F. Iida","doi":"10.1109/IROS.2018.8593852","DOIUrl":"https://doi.org/10.1109/IROS.2018.8593852","url":null,"abstract":"Prefabrication of structures is currently used in a limited capacity, due to the lack of flexibility, despite the potential cost and speed advantages. Autonomous flexible reassembly enables structures to be developed which can be continuously and iteratively dis-assembled and re-assembled providing far more flexibility in comparison to single shot pre-fabrication methods. Dis-assembly of structures should be considered when assembling, due to the asymmetry of assembly and dis-assembly processes, to ensure structures can be recycled and re-assembled. This allows for agile development, significantly reducing the time and resource usage during the build process. In this work, a framework for flexible re-assembly is developed and a robotic platform is developed to implement and test this framework with simple Lego bricks. The tradeoffs in terms of time, resource use and probability of success of this new assembly method can be understood by using a cost function to compare to alternative fabrication methods.","PeriodicalId":6640,"journal":{"name":"2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"38 1","pages":"1-9"},"PeriodicalIF":0.0,"publicationDate":"2018-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88442935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}