Pub Date: 2022-05-23 | DOI: 10.1109/icra46639.2022.9812172
Yuejiang Su, A. Lanzon
This paper investigates the cooperative control problem of choke-point navigation for multiple quadcopters when only a subgroup of them is equipped with obstacle-detecting sensors. We define a quadcopter as a leader if it carries an obstacle-detecting sensor; otherwise, it is a follower. In addition, we introduce a virtual leader agent to generate the group motion. First, we apply the leader-follower approach and propose a formation-containment tracking controller that lets the quadcopters track the time-varying velocity of the virtual leader agent. At the same time, the leader quadcopters form the prescribed formation while the follower quadcopters converge inside a safe region, namely the convex hull spanned by those leaders. Then, we introduce a scaling vector into the displacement-based formation constraints. When the leader quadcopters identify the choke-point via their obstacle-detecting sensors, they update the scaling variable to adjust the size of the formation (i.e., the safe region) and guide all quadcopters safely through the choke-point. The proposed cooperative controllers are distributed because each quadcopter's control command relies only on the information states of its neighbours. Finally, two autonomous flight experiments, covering formation-containment tracking and choke-point navigation, validate the effectiveness of the proposed cooperative control laws.
Title: Formation-containment tracking and scaling for multiple quadcopters with an application to choke-point navigation
Published in: 2022 International Conference on Robotics and Automation (ICRA)
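The scaling mechanism described above can be sketched in a few lines: each leader tracks a scaled displacement offset from the virtual leader, and a follower converges toward a point inside the leaders' convex hull. This is an illustrative toy, not the paper's controller; the function names, the centroid rule for followers, and the single scalar `scale` are all assumptions.

```python
import numpy as np

def leader_target(virtual_pos, offset, scale):
    """Desired leader position: the virtual leader's position plus a
    scaled displacement offset; shrinking `scale` narrows the formation."""
    return virtual_pos + scale * offset

def follower_target(leader_positions):
    """A follower converging into the leaders' convex hull; the centroid
    is one point guaranteed to lie inside it."""
    return np.mean(leader_positions, axis=0)

# Virtual leader at the origin; two leaders offset 1 m left and right.
vl = np.array([0.0, 0.0])
offsets = [np.array([-1.0, 0.0]), np.array([1.0, 0.0])]

wide   = [leader_target(vl, d, scale=1.0) for d in offsets]
narrow = [leader_target(vl, d, scale=0.4) for d in offsets]  # choke-point seen

print(narrow[0], narrow[1])             # leaders now 0.8 m apart, not 2 m
print(follower_target(np.array(wide)))  # follower pulled to [0. 0.]
```

Updating `scale` is the only coordination needed at the choke-point, which is consistent with the distributed, neighbour-only information flow the abstract describes.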
Pub Date: 2022-05-23 | DOI: 10.1109/ICRA46639.2022.9811858
Ganghun Lee, Minji Kim, M. Lee, Byoung-Tak Zhang
We present an automated learning framework for a robotic sketching agent that learns stroke-based rendering and motor control simultaneously. We formulate robotic sketching as deep decoupled hierarchical reinforcement learning: two policies for stroke-based rendering and motor control are learned independently to achieve the sub-tasks of drawing, and form a hierarchy when cooperating for real-world drawing. Without hand-crafted features, drawing sequences or trajectories, or inverse kinematics, the proposed method trains the robotic sketching agent from scratch. We performed experiments with a 6-DoF robot arm and a 2F gripper to sketch doodles. Our experimental results show that the two policies successfully learned the sub-tasks and collaborated to sketch the target images. We also examined robustness and flexibility by varying the drawing tools and surfaces.
Title: From Scratch to Sketch: Deep Decoupled Hierarchical Reinforcement Learning for Robotic Sketching Agent
Pub Date: 2022-05-23 | DOI: 10.1109/icra46639.2022.9811537
Kemal Bektas, H. I. Bozma
This paper focuses on safe mapless navigation of mobile robots in unknown and possibly complex environments containing both internal and dynamic obstacles. We present a novel modular approach that combines the strengths of artificial potential functions (APF) with deep reinforcement learning. Unlike related work, the robot learns how to adjust the two input parameters of the APF controller as necessary through the soft actor-critic algorithm. Environmental complexity measures are introduced to ensure that the robot's training covers a range of learning scenarios that vary in maneuvering difficulty. Our experimental results show that, unlike classical navigation methods and end-to-end models, the robot can navigate successfully on its own even in complex scenarios with moving entities, without requiring any maps.
Title: APF-RL: Safe Mapless Navigation in Unknown Environments
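To make the APF-RL idea concrete, here is a minimal classical APF velocity command with the attractive and repulsive gains exposed as the two tunable inputs; in the paper's scheme a soft actor-critic policy would output these gains online. The function signature, the gain names, and the influence radius `d0` are illustrative assumptions, not the authors' code.

```python
import numpy as np

def apf_velocity(pos, goal, obstacles, k_att, k_rep, d0=1.0):
    """Classical APF velocity command: attraction to the goal plus
    repulsion from any obstacle within influence radius d0. In the
    paper's scheme, (k_att, k_rep) would be chosen at each step by a
    learned policy; here they are plain arguments."""
    v = k_att * (goal - pos)                       # attractive term
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 0 < d < d0:                             # repel only inside d0
            v += k_rep * (1.0 / d - 1.0 / d0) * diff / d**3
    return v

pos, goal = np.array([0.0, 0.0]), np.array([5.0, 0.0])
obstacles = [np.array([0.5, 0.2])]                 # close enough to repel
print(apf_velocity(pos, goal, obstacles, k_att=0.5, k_rep=0.8))
```

Exposing only these two scalars keeps the RL action space tiny, which is the appeal of a modular APF-plus-learning design over end-to-end control.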
Pub Date: 2022-05-23 | DOI: 10.1109/icra46639.2022.9811887
Justin Svegliato, Connor Basich, Sandhya Saisubramanian, S. Zilberstein
Although experts carefully specify the high-level decision-making models in autonomous systems, it is infeasible to guarantee safety across every scenario during operation. We therefore propose a safety metareasoning system that trades off the severity of the system's safety concerns against the interference with the system's task: the system executes in parallel a task process that completes a specified task and safety processes that each address a specified safety concern, with a conflict resolver for arbitration. This paper offers a formal definition of a safety metareasoning system, a recommendation algorithm for a safety process, an arbitration algorithm for a conflict resolver, an application of our approach to planetary rover exploration, and a demonstration that our approach is effective in simulation.
Title: Metareasoning for Safe Decision Making in Autonomous Systems
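A toy version of the arbitration step might look like the following: the task process proposes an action, each safety process may recommend an override tagged with the severity of its concern, and the conflict resolver defers to the most severe objection. The dictionary layout, the process names, and the "highest severity wins" rule are simplifying assumptions, not the paper's algorithms.

```python
def resolve(task_action, safety_recommendations):
    """Return the task action unless a safety process objects; among
    objections, defer to the most severe concern."""
    objections = [r for r in safety_recommendations if r["action"] is not None]
    if not objections:
        return task_action
    return max(objections, key=lambda r: r["severity"])["action"]

# Hypothetical rover safety processes running alongside the task process.
recommendations = [
    {"process": "dust-storm monitor", "severity": 2, "action": "slow-down"},
    {"process": "tilt monitor",       "severity": 5, "action": "stop"},
    {"process": "battery monitor",    "severity": 1, "action": None},  # no objection
]
print(resolve("drive-forward", recommendations))  # most severe concern wins: stop
```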
Pub Date: 2022-05-23 | DOI: 10.1109/icra46639.2022.9812184
Liam Schramm, Abdeslam Boularias
Optimal motion planning is a long-studied problem with a wide range of applications in robotics, from grasping to navigation. While sampling-based motion planning methods have made solving such problems significantly more feasible, these methods still often struggle in high-dimensional spaces wherein exploration is computationally costly. In this paper, we propose a new motion planning algorithm that reduces the computational burden of the exploration process. The proposed algorithm utilizes a guidance policy acquired offline through model-free reinforcement learning. The guidance policy is used to bias the exploration process in motion planning and to guide it toward promising regions of the state space. Moreover, we show that the gradients of the corresponding learned value function can be used to locally fine-tune the sampled states. We empirically demonstrate that the proposed approach can significantly reduce planning time and improve success rate and path quality.
Title: Learning-Guided Exploration for Efficient Sampling-Based Motion Planning in High Dimensions
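A minimal sketch of the exploration step described above: with some probability the next sample is drawn from the learned guidance policy rather than uniformly, and the sample is then locally fine-tuned by a few gradient-ascent steps on the learned value function. All callables and parameter names here are stand-ins, not the paper's implementation.

```python
import random

def guided_sample(uniform_sampler, policy_sampler, value_grad,
                  bias=0.5, step=0.1, tune_iters=3):
    """One sampling step: with probability `bias` draw from the
    offline-learned guidance policy, otherwise uniformly; then nudge
    the sample a few gradient steps up the learned value function."""
    x = policy_sampler() if random.random() < bias else uniform_sampler()
    for _ in range(tune_iters):  # local fine-tuning via the value gradient
        x = [xi + step * gi for xi, gi in zip(x, value_grad(x))]
    return x

# Toy 2-D state space whose value function peaks at the goal (2, 0).
goal = (2.0, 0.0)
value_grad = lambda x: [g - xi for xi, g in zip(x, goal)]  # grad of -0.5*||x-goal||^2
uniform = lambda: [random.uniform(-5, 5), random.uniform(-5, 5)]
policy  = lambda: [goal[0] + random.gauss(0, 0.5), goal[1] + random.gauss(0, 0.5)]

print(guided_sample(uniform, policy, value_grad, bias=1.0))  # pulled toward the goal
```

Keeping `bias` below 1 retains some uniform exploration, so the planner is not trapped by an imperfect learned policy.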
Pub Date: 2022-05-23 | DOI: 10.1109/icra46639.2022.9811705
Kaylene C. Stocking, D. McPherson, R. Matthew, C. Tomlin
When a robot observes another agent unexpectedly modifying their behavior, inferring the most likely cause is a valuable tool for maintaining safety and reacting appropriately. In this work, we present a novel method for inferring constraints that works on continuous, possibly sub-optimal demonstrations. We first learn a representation of the continuous-state maximum entropy trajectory distribution using deep reinforcement learning. We then use Monte Carlo sampling from this distribution to generate expected constraint violation probabilities and perform constraint inference. When the demonstrator's dynamics and objective function are known in advance, this process can be performed offline, allowing for real-time constraint inference at the moment demonstrations are observed. We evaluate our approach on two continuous dynamical systems: a 2-dimensional inverted pendulum model, and a 4-dimensional unicycle model that was successfully used for fast constraint inference on a 1/10 scale car remote-controlled by a human.
Title: Maximum Likelihood Constraint Inference on Continuous State Spaces
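The Monte Carlo inference step might be sketched as follows: candidate constraints are scored by how often trajectories sampled from the nominal maximum-entropy distribution would violate them, discarding any candidate that the demonstration itself violates. This discrete 1-D toy, including the candidate dictionary and the scoring rule, is an assumption-laden simplification of the paper's continuous-state procedure.

```python
def infer_constraint(sampled_trajs, demo_traj, candidates):
    """Score each candidate constraint (a predicate over states) by how
    often nominal max-ent trajectories would violate it; a candidate the
    demonstration itself violates cannot be a real constraint. The
    highest-scoring survivor is the maximum-likelihood explanation."""
    best, best_p = None, 0.0
    for name, violates in candidates.items():
        if any(violates(s) for s in demo_traj):
            continue                          # demo breaks it: rule it out
        hits = sum(any(violates(s) for s in traj) for traj in sampled_trajs)
        p = hits / len(sampled_trajs)         # expected violation probability
        if p > best_p:
            best, best_p = name, p
    return best, best_p

# 1-D toy: nominal trajectories wander freely; the demo avoids x > 3.
sampled = [[0, 1, 2, 4], [0, 2, 3, 5], [0, 1, 1, 2]]
demo = [0, 1, 2, 2]
candidates = {"x>3": lambda s: s > 3, "x<0": lambda s: s < 0}
print(infer_constraint(sampled, demo, candidates))  # -> ('x>3', 0.666...)
```

Because the sampling distribution can be prepared offline, only this cheap scoring loop needs to run when a demonstration is observed, matching the real-time claim in the abstract.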
Pub Date: 2022-05-23 | DOI: 10.1109/icra46639.2022.9812390
Naotomo Tottori, Sora Sadamichi, S. Sakuma, Tomomi Tsubouchi, Y. Yamanishi
Cell fusion has been widely applied in scientific research on cancer immunotherapy, antibody production, and nuclear reprogramming of somatic cells; cell fusion techniques that enable precise control of the fusion process in a high-throughput manner have therefore long been desired. Here, we present a novel microfluidic method for automatic cell pairing in microdroplets, separation of the droplets containing cells, and electrofusion of the cells inside a droplet. The proposed microfluidic device is mainly composed of three sequential functional parts for (i) encapsulation of cells into droplets by a microfluidic droplet generator, (ii) separation of droplets containing cells from empty droplets through a micropillar array, and (iii) electrofusion of cells inside the droplets by applying a voltage as each droplet passes over a pair of electrodes. In the microfluidic device, cell-encapsulating and empty droplets were generated at the upstream cross-junction; they then entered the micropillar array, which continuously separated the cell-encapsulating droplets from the empty ones. After separation, the droplets passed over the electrode pairs and were collected outside the microchannel. This continuous cell-fusion process would enable observation and isolation of the target fused cells for cell analysis.
Title: On-chip Continuous Pairing, Separation and Electrofusion of Cells Using a Microdroplet
Pub Date: 2022-05-23 | DOI: 10.1109/icra46639.2022.9811610
M. Rahimi, Yantao Shen, Cong Peng, Zhiming Liu, Fang Jiang
In this paper, to aid the blind and visually impaired (BVI) in reading any text not written in Braille, we develop a custom-built, finger-wearable, electro-tactile Braille reading system with a Rapid Optical Character Recognition (R-OCR) method. R-OCR processes text information in real time using a miniature fish-eye imaging device mounted on the finger-wearable system. This allows real-time translation of printed text to electro-Braille that follows the natural movement of the user's fingertip, as if reading a Braille sign or book. We further propose an electro-tactile neuro-stimulation feedback mechanism and incorporate it into the reading system, enabling a new opto-electrotactile-feedback-based text-line tracking control approach that lets the user's fingertip follow a text line during reading. Extensive experiments were designed and conducted to test the ability of blindfolded participants to read through and follow printed text lines using this opto-electrotactile feedback. The results show that users in the feedback loop were able to keep their fingertips within 2 mm of the text while "reading" through a printed text line. Our work is a significant step toward giving BVI users a portable means of reading any printed text, whether in the digital realm or physically on any surface, by following it and translating it into Braille.
Title: Opto-electrotactile Feedback Enabled Text-line Tracking Control for A Finger-wearable Reading Aid for the Blind
Pub Date: 2022-05-23 | DOI: 10.1109/icra46639.2022.9812340
Yiming Qian, Hang Yan, Sachini Herath, Pyojin Kim, Yasutaka Furukawa
This paper proposes a novel motion estimation algorithm using WiFi networks and IMU sensor data in large uncontrolled environments, dubbed "WiFi Structure-from-Motion" (WiFi SfM). Given smartphone sensor data collected through day-to-day activities by a single user over a month, our WiFi SfM algorithm estimates smartphone motion trajectories and the structure of the environment, represented as a WiFi radio map. The approach 1) establishes frame-to-frame correspondences based on WiFi fingerprints while exploiting our repetitive behavior patterns; 2) aligns trajectories via bundle adjustment; and 3) trains a self-supervised neural network to extract further motion constraints. We collected 235 hours of smartphone data, spanning 38 days of daily activities on a university campus. Our experiments demonstrate the effectiveness of our approach over competing methods with qualitative evaluations of the estimated motions and quantitative evaluations of indoor localization accuracy based on the reconstructed WiFi radio map. The WiFi SfM technology could allow digital mapping companies to build better radio maps automatically by asking users to share WiFi/IMU sensor data from their daily activities.
Title: Single User WiFi Structure from Motion in the Wild
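One plausible way to realise the fingerprint-based frame correspondences in step 1) above is cosine similarity between RSSI vectors over shared access points: frames scoring above a threshold are treated as revisits of the same place. The data layout, the AP names, and the threshold are illustrative assumptions, not the paper's method.

```python
import math

def fingerprint_similarity(fp_a, fp_b):
    """Cosine similarity between two WiFi fingerprints, each a dict
    mapping access-point MAC -> RSSI (dBm). Only shared APs contribute
    to the dot product; disjoint fingerprints score 0."""
    shared = set(fp_a) & set(fp_b)
    if not shared:
        return 0.0
    dot = sum(fp_a[m] * fp_b[m] for m in shared)
    norm_a = math.sqrt(sum(v * v for v in fp_a.values()))
    norm_b = math.sqrt(sum(v * v for v in fp_b.values()))
    return dot / (norm_a * norm_b)

monday = {"ap:01": -40, "ap:02": -70}
friday = {"ap:01": -42, "ap:02": -68}   # same corridor, days apart
elsewhere = {"ap:09": -50}

print(fingerprint_similarity(monday, friday))     # close to 1: likely revisit
print(fingerprint_similarity(monday, elsewhere))  # 0.0: no shared APs
```

Matched pairs like (monday, friday) would become loop-closure constraints for the bundle-adjustment stage.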
Pub Date: 2022-05-23 | DOI: 10.1109/icra46639.2022.9812211
Tamim Samman, Ayan Dutta, O. P. Kreidl, Swapnoneel Roy, Ladislau Bölöni
Multi-robot teams are becoming an increasingly popular approach to information gathering in large geographic areas, with applications in precision agriculture, surveying the aftermath of natural disasters, and tracking pollution. These robot teams are often assembled from untrusted devices not owned by the user, making it an important challenge to maintain the integrity of the collected samples. Furthermore, such robots often operate under opportunistic or periodic connectivity and are limited in their energy budget and computational power. In this paper, we propose algorithms that build on blockchain technology to address the data integrity problem while also taking into account the limitations of the robots' resources and communication. We evaluate the proposed algorithms in terms of the tradeoffs among data integrity, model accuracy, and time consumption.
Title: Secure Multi-Robot Information Sampling with Periodic and Opportunistic Connectivity
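The core blockchain ingredient such algorithms build on can be illustrated with a minimal tamper-evident hash chain: each appended sample commits to the hash of its predecessor, so altering any past sample invalidates every later link. This sketch omits consensus, replication, and the paper's connectivity-aware protocol; the record layout is an assumption.

```python
import hashlib
import json

def append_sample(chain, sample):
    """Append a sensor sample; each block commits to its predecessor's
    hash, so changing any earlier sample breaks every later link."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"prev": prev, "sample": sample}, sort_keys=True)
    chain.append({"prev": prev, "sample": sample,
                  "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(chain):
    """Recompute every link; return False on any tampering."""
    prev = "0" * 64
    for block in chain:
        payload = json.dumps({"prev": prev, "sample": block["sample"]},
                             sort_keys=True)
        if block["prev"] != prev or \
                hashlib.sha256(payload.encode()).hexdigest() != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = []
for reading in [{"robot": 1, "temp": 21.5}, {"robot": 2, "temp": 19.8}]:
    append_sample(chain, reading)
print(verify(chain))                  # True: intact
chain[0]["sample"]["temp"] = 99.9     # tamper with an old sample
print(verify(chain))                  # False: the chain detects it
```

Verification costs one hash per block, which is why hash-chained logs are attractive for robots with tight energy and compute budgets.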