We present a novel approach for enhancing robotic exploration using generative occupancy mapping. We implement SceneSense, a diffusion model designed and trained to predict 3D occupancy maps from partial observations. Our proposed approach probabilistically fuses these predictions into a running occupancy map in real time, yielding significant improvements in map quality and traversability. We deploy SceneSense on a quadruped robot and validate its performance with real-world experiments to demonstrate the effectiveness of the model. In these experiments, we show that occupancy maps enhanced with SceneSense predictions better estimate the distribution of our fully observed ground-truth data (24.44% FID improvement around the robot and 75.59% improvement at range). We additionally show that integrating SceneSense-enhanced maps into our robotic exploration stack as a “drop-in” map improvement, using an existing off-the-shelf planner, improves robustness and traversal time. Finally, we report full exploration evaluations with our proposed system in two dissimilar environments and find that locally enhanced maps provide more consistent exploration results than maps constructed only from direct sensor measurements.
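The abstract does not spell out the fusion rule; a common way to probabilistically fuse predicted occupancy into a running map is log-odds accumulation, sketched below. The cell keys, `weight` parameter, and function names are illustrative assumptions, not SceneSense's actual interface:

```python
import math

def logodds(p):
    """Convert an occupancy probability to log-odds."""
    return math.log(p / (1.0 - p))

def fuse(map_logodds, cell, p_pred, weight=1.0):
    """Fuse a predicted occupancy probability into the running map.

    `weight` down-weights generative predictions relative to direct
    sensor measurements (an assumption, not the paper's stated rule).
    """
    map_logodds[cell] = map_logodds.get(cell, 0.0) + weight * logodds(p_pred)
    return map_logodds

def probability(map_logodds, cell):
    """Recover the occupancy probability of a cell (0.5 if never updated)."""
    l = map_logodds.get(cell, 0.0)
    return 1.0 - 1.0 / (1.0 + math.exp(l))

m = {}
fuse(m, (3, 1, 0), 0.9)              # strong "occupied" prediction
fuse(m, (3, 1, 0), 0.9)              # repeated evidence accumulates
fuse(m, (4, 1, 0), 0.2, weight=0.5)  # weak "free" prediction, down-weighted
```

Log-odds addition keeps the update commutative and cheap, which matters for the real-time fusion the paper emphasises.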
{"title":"Robust robotic exploration and mapping using generative occupancy map synthesis","authors":"Lorin Achey, Alec Reed, Brendan Crowe, Bradley Hayes, Christoffer Heckman","doi":"10.1007/s10514-025-10229-0","DOIUrl":"10.1007/s10514-025-10229-0","url":null,"abstract":"<div><p>We present a novel approach for enhancing robotic exploration by using generative occupancy mapping. We implement SceneSense, a diffusion model designed and trained for predicting 3D occupancy maps given partial observations. Our proposed approach probabilistically fuses these predictions into a running occupancy map in real-time, resulting in significant improvements in map quality and traversability. We deploy SceneSense on a quadruped robot and validate its performance with real-world experiments to demonstrate the effectiveness of the model. In these experiments we show that occupancy maps enhanced with SceneSense predictions better estimate the distribution of our fully observed ground truth data (24.44% FID improvement around the robot and 75.59% improvement at range). We additionally show that integrating SceneSense enhanced maps into our robotic exploration stack as a “drop-in” map improvement, utilizing an existing off-the-shelf planner, results in improvements in robustness and traversability time. 
Finally, we show results of full exploration evaluations with our proposed system in two dissimilar environments and find that locally enhanced maps provide more consistent exploration results than maps constructed only from direct sensor measurements.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"50 1","pages":""},"PeriodicalIF":4.3,"publicationDate":"2025-12-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10514-025-10229-0.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145886824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-29 | DOI: 10.1007/s10514-025-10234-3
Jan Bayer, Jan Faigl
In this paper, we address the problem of coordinating multiple robots to explore large-scale underground areas where only low-bandwidth communication is available. Evaluating existing coordination methods, we found that the well-performing ones rely on exchanging significant amounts of data, including maps. Such extensive data exchange is infeasible over the low-bandwidth links typical of underground environments. Therefore, we propose a coordination method that satisfies low-bandwidth constraints by sharing only the robots’ positions. The proposed method employs a fully decentralized principle called Cross-rank that computes how to distribute robots uniformly at intersections and subsequently orders exploration waypoints based on a traveling salesman problem formulation. The proposed principle has been evaluated in terms of exploration time, traveled distance, and coverage in five large-scale simulated subterranean environments and in a real-world deployment with three quadruped robots. The results suggest that the proposed approach provides a suitable tradeoff between the required communication bandwidth and the time needed for exploration.
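The waypoint-ordering step is formulated as a traveling salesman problem; as a rough illustration, the sketch below uses a greedy nearest-neighbour heuristic in place of a real TSP solver, and straight-line distance in place of map-based path costs (both simplifications are assumptions, not Cross-rank itself):

```python
import math

def order_waypoints(start, waypoints):
    """Order exploration waypoints with a greedy nearest-neighbour tour.

    A stand-in for the TSP formulation in the paper: repeatedly visit
    the closest remaining waypoint from the current position.
    """
    remaining = list(waypoints)
    tour, pos = [], start
    while remaining:
        nxt = min(remaining, key=lambda w: math.dist(pos, w))
        tour.append(nxt)
        remaining.remove(nxt)
        pos = nxt
    return tour

tour = order_waypoints((0, 0), [(5, 0), (1, 0), (1, 1), (6, 1)])
```

Because only waypoint coordinates are needed, such an ordering can be computed locally from shared robot positions, consistent with the low-bandwidth constraint.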
{"title":"Decentralized multi-robot exploration under low-bandwidth communications","authors":"Jan Bayer, Jan Faigl","doi":"10.1007/s10514-025-10234-3","DOIUrl":"10.1007/s10514-025-10234-3","url":null,"abstract":"<div><p>In this paper, we address the problem of coordinating multiple robots to explore large-scale underground areas covered with low-bandwidth communication. Based on the evaluation of existing coordination methods, we found that well-performing methods rely on exchanging significant amounts of data, including maps. Such extensive data exchange becomes infeasible using only low-bandwidth communication, which is suitable for underground environments. Therefore, we propose a coordination method that satisfies low-bandwidth constraints by sharing only the robot’s positions. The proposed method employs a fully decentralized principle called Cross-rank that computes how to distribute robots uniformly at intersections and subsequently orders exploration waypoints based on the traveling salesman problem formulation. The proposed principle has been evaluated based on exploration time, traveled distance, and coverage in five large-scale simulated subterranean environments and a real-world deployment with three quadruped robots. 
The results suggest that the proposed approach provides a suitable tradeoff between the required communication bandwidth and the time needed for exploration.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"50 1","pages":""},"PeriodicalIF":4.3,"publicationDate":"2025-12-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10514-025-10234-3.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145886901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-26 | DOI: 10.1007/s10514-025-10221-8
Matteo Luperto, Marco Maria Ferrara, Matteo Princisgh, Giacomo Boracchi, Francesco Amigoni
We present a novel method that, given a grid map of a partially explored indoor environment, estimates the amount of explored area in the map and whether it is worth continuing to explore the uncovered part of the environment. Our method is based on the idea that modern deep learning models can successfully solve this task by leveraging visual cues in the map. Thus, we train a deep convolutional neural network on images depicting grid maps of partially explored environments, with annotations derived from knowledge of the entire map, which is not available when the network is used for inference. We show that our network can be used to define a stopping criterion that successfully terminates the exploration process when it is expected to no longer add relevant details about the environment to the map, saving more than 35% of the total exploration time compared to covering the whole environment area.
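A stopping criterion built on such completeness estimates might look like the sketch below, where the network is abstracted as a stream of estimates in [0, 1]; the threshold and patience values are illustrative assumptions, not the paper's:

```python
def should_stop(completeness_history, threshold=0.9, patience=3):
    """Terminate exploration once the estimated map completeness has
    stayed above `threshold` for `patience` consecutive estimates.

    `completeness_history` holds the network's outputs over time;
    requiring several consecutive high estimates guards against a
    single over-optimistic prediction ending the mission early.
    """
    if len(completeness_history) < patience:
        return False
    return all(c >= threshold for c in completeness_history[-patience:])

history = [0.42, 0.61, 0.78, 0.91, 0.93, 0.95]
```

The patience window is one simple way to trade a little extra exploration time for robustness to noisy per-frame estimates.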
{"title":"Estimating map completeness in robot exploration","authors":"Matteo Luperto, Marco Maria Ferrara, Matteo Princisgh, Giacomo Boracchi, Francesco Amigoni","doi":"10.1007/s10514-025-10221-8","DOIUrl":"10.1007/s10514-025-10221-8","url":null,"abstract":"<div><p>We present a novel method that, given a grid map of a partially explored indoor environment, estimates the amount of the explored area in the map and whether it is worth continuing to explore the uncovered part of the environment. Our method is based on the idea that modern deep learning models can successfully solve this task by leveraging visual clues in the map. Thus, we train a deep convolutional neural network on images depicting grid maps from partially explored environments, with annotations derived from the knowledge of the entire map, which is not available when the network is used for inference. We show that our network can be used to define a stopping criterion to successfully terminate the exploration process when this is expected to no longer add relevant details about the environment to the map, saving more than 35% of the total exploration time compared to covering the whole environment area.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"50 1","pages":""},"PeriodicalIF":4.3,"publicationDate":"2025-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10514-025-10221-8.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145831337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-24 | DOI: 10.1007/s10514-025-10225-4
Patrick Zhong, Federico Rossi, Dylan A. Shell
An important class of robotic applications involves multiple agents cooperating to provide state observations to plan joint actions. We study planning under uncertainty when more than one participant must proactively plan perception and/or communication acts, and decide whether the cost to obtain a state estimate is justified by the benefits accrued by the information thus obtained. The approach we introduce is suitable for settings where observations are of high quality and they—either alone or along with communication—recover the system’s joint state, but the costs incurred mean this happens only infrequently. We formulate the problem as a type of Markov decision process (MDP) to be solved over macro-actions, sidestepping the construction of the full joint belief space, a well-known source of intractability. We then give a suitable Bellman-like recurrence that immediately suggests a means of solution. In their most general form, policies for these problems simultaneously describe (1) low-level actions to be taken, (2) stages when system-wide state is recovered, and (3) commitments to future rescheduling acts. The formulation expresses multi-agency in a variety of distinct practical forms, including: one party assisting by providing observations of, or reference points for, another; several agents communicating sensor information to fuse data and recover joint state; multiple agents coordinating activities to arrive at states that make joint state simultaneously observable to all individuals. Though solved in centralized form over joint states, the MDP is structured to allow decentralized execution, under some assumptions of synchrony in activities. After providing small-scale simulation studies of the general formulation, we discuss a specific scenario motivated by underwater gliders. We report on a physical robot implementation mocked up to respect these same constraints, showing that joint plans are found and executed effectively by individual robots after appropriate projection. On the basis of our experience with hardware, we examine enhancements to the model that address nonidealities we have identified in practice, including the assumptions regarding synchrony.
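The Bellman-like recurrence over macro-actions can be illustrated on a toy problem where an expensive macro-action that directly recovers joint state competes with a cheap but stochastic one. The states, costs, and transition probabilities below are invented for illustration, not the paper's glider scenario:

```python
def value_iteration(states, goal, actions, gamma=1.0, tol=1e-9):
    """Solve a Bellman-like recurrence over macro-actions.

    `actions[s]` maps each non-goal state to a list of
    (cost, {next_state: prob}) macro-actions; value iteration finds
    the minimum expected cost-to-go for each state.
    """
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            if s == goal:
                continue
            best = min(cost + gamma * sum(p * V[t] for t, p in trans.items())
                       for cost, trans in actions[s])
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

actions = {
    "A": [(2.0, {"G": 1.0}),             # costly macro-action: observe and recover joint state
          (1.0, {"B": 0.5, "A": 0.5})],  # cheap macro-action with a stochastic outcome
    "B": [(1.0, {"G": 1.0})],
}
V = value_iteration(["A", "B", "G"], "G", actions)
```

From state A the expensive observation (cost 2) beats the cheap gamble, whose expected cost-to-go works out to 3 — exactly the cost/benefit trade-off over state estimates the paper formalises.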
{"title":"Planned synchronization for multi-robot systems with active observations","authors":"Patrick Zhong, Federico Rossi, Dylan A. Shell","doi":"10.1007/s10514-025-10225-4","DOIUrl":"10.1007/s10514-025-10225-4","url":null,"abstract":"<div><p>An important class of robotic applications involves multiple agents cooperating to provide state observations to plan joint actions. We study planning under uncertainty when more than one participant must proactively plan perception and/or communication acts, and decide whether the cost to obtain a state estimate is justified by the benefits accrued by the information thus obtained. The approach we introduce is suitable for settings where observations are of high quality and they—either alone or along with communication—recover the system’s joint state, but the costs incurred mean this happens only infrequently. We formulate the problem as a type of Markov decision process (<span>mdp</span>) to be solved over macro-actions, sidestepping the construction of the full joint belief space, a well-known source of intractability. We then give a suitable Bellman-like recurrence that immediately suggests a means of solution. In their most general form, policies for these problems simultaneously describe (1) low-level actions to be taken, (2) stages when system-wide state is recovered, and (3) commitments to future rescheduling acts. The formulation expresses multi-agency in a variety of distinct practical forms, including: one party assisting by providing observations of, or reference points for, another; several agents communicating sensor information to fuse data and recover joint state; multiple agents coordinating activities to arrive at states that make joint state simultaneously observable to all individuals. Though solved in centralized form over joint states, the <span>mdp</span> is structured to allow decentralized execution, under some assumptions of synchrony in activities. 
After providing small-scale simulation studies of the general formulation, we discuss a specific scenario motivated by underwater gliders. We report on a physical robot implementation mocked-up to respect these same constraints, showing that joint plans are found and executed effectively by individual robots after appropriate projection. On the basis of our experience with hardware, we examine enhancements to the model that address nonidealities we have identified in practice, including the assumptions regarding synchrony.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"50 1","pages":""},"PeriodicalIF":4.3,"publicationDate":"2025-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10514-025-10225-4.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145831467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-02 | DOI: 10.1007/s10514-025-10223-6
Barbara Abonyi-Tóth, Ákos Nagy
In this paper, we present a novel method for autonomous robotic exploration using a car-like robot. The proposed method uses the frontiers in the map to build a tree representing the structure of the environment to aid the goal-selection method. We also propose an augmentation of the method that is able to manage the loops present in the environment; in this case, the environment is represented with a graph structure. We compared the two proposed methods with seven state-of-the-art exploration methods in three simulated environments. The experiments show that the proposed methods outperform the existing methods both in the time taken until full exploration and in the distance traveled during the exploration, while offering a robust solution for autonomous robotic exploration without the need to tune several parameters to the unknown environment. The proposed exploration method was also tested using a real-life robot in an office scenario.
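The frontier extraction that feeds the tree construction can be sketched with the standard definition of a frontier: a free cell adjacent to unknown space. The tree-building and goal-selection steps themselves are omitted here, and the 4-connected neighbourhood is an assumption:

```python
FREE, OCCUPIED, UNKNOWN = 0, 1, -1

def frontier_cells(grid):
    """Return free cells bordering unknown space — the frontiers the
    method organises into a tree (or a graph, in the loop-aware variant).
    """
    rows, cols = len(grid), len(grid[0])
    out = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != FREE:
                continue
            # 4-connected neighbours; any unknown neighbour makes this a frontier
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols and grid[rr][cc] == UNKNOWN:
                    out.append((r, c))
                    break
    return out

grid = [
    [0, 0, -1],
    [0, 1, -1],
    [0, 0,  0],
]
frontiers = frontier_cells(grid)
```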
{"title":"A tree-based exploration method: utilizing the topology of the map as the basis of goal selection","authors":"Barbara Abonyi-Tóth, Ákos Nagy","doi":"10.1007/s10514-025-10223-6","DOIUrl":"10.1007/s10514-025-10223-6","url":null,"abstract":"<div><p>In this paper, we present a novel method for autonomous robotic exploration using a car-like robot. The proposed method uses the frontiers in the map to build a tree representing the structure of the environment to aid the goal-selection method. An augmentation of the method is also proposed which is able to manage the loops present in the environment. In this case, the environment is represented with a graph structure. We compared the two proposed methods with seven state-of-the-art exploration methods in three simulated environments. The experiments show, that the proposed methods outperform the existing methods both in the time taken until full exploration and the distance traveled during the exploration, while offering a robust solution for autonomous robotic exploration without the need to tune several parameters to the unknown environment. The proposed exploration method was also tested using a real-life robot in an office scenario.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"50 1","pages":""},"PeriodicalIF":4.3,"publicationDate":"2025-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145675500","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-28 | DOI: 10.1007/s10514-025-10218-3
Mingi Jeong, Cristian Molinaro, Tonmoay Deb, Youzhi Zhang, Andrea Pugliese, Eugene Santos Jr., V. S. Subrahmanian, Alberto Quattrini Li
This paper addresses the problem of both actively searching for and tracking multiple unknown dynamic objects in a known environment with multiple cooperative autonomous agents under partial observability. The tracking of a target ends when the uncertainty is below a specified threshold. Current methods typically assume homogeneous agents without access to external information and utilize short-horizon target predictive models. Such assumptions limit real-world applications. We propose a fully integrated pipeline whose main novel contributions are: (1) a time-varying weighted belief representation capable of handling knowledge that changes over time, which includes external reports of varying levels of trustworthiness in addition to the agents involved; (2) the integration of a Long Short-Term Memory-based trajectory prediction within the optimization framework for long-horizon decision-making, which accounts for trajectory prediction in time-configuration space, thus increasing responsiveness; and (3) a comprehensive system that accounts for multiple agents and enables information-driven optimization during both the search and track tasks. When communication is available, our proposed strategy consolidates exploration results collected asynchronously by agents and external sources into a headquarters, which can allocate each agent to maximize the overall team’s utility, effectively using all available information. We tested our approach extensively in Monte Carlo simulations against baselines representative of classes of approaches from the literature, and in robustness and ablation studies. In addition, we performed experiments in a 3D physics-based robot simulator to test applicability in the real world, as well as with real-world trajectories obtained from an oceanography computational fluid dynamics simulator. Results show the effectiveness of our proposed method, which finds all targets with mission completion times 1.3 to 3.2 times faster in most scenarios, including challenging ones where the number of targets is five times that of the agents.
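A time-varying weighted belief could, for instance, weight each report by its source's trustworthiness and decay that weight with age, so stale reports fade. The exponential decay, half-life, and report format below are illustrative assumptions, not the paper's exact model:

```python
def fuse_reports(reports, now, half_life=10.0):
    """Fuse target-location reports into a weighted belief over cells.

    Each report is (cell, trust, timestamp): trust in [0, 1] encodes the
    source's trustworthiness, and the weight halves every `half_life`
    time units, making the belief time-varying.
    """
    belief = {}
    for cell, trust, t in reports:
        age = now - t
        w = trust * 0.5 ** (age / half_life)
        belief[cell] = belief.get(cell, 0.0) + w
    total = sum(belief.values())
    # normalise so the belief sums to 1 (when any evidence exists)
    return {c: w / total for c, w in belief.items()} if total else belief

b = fuse_reports(
    [((2, 3), 1.0, 0.0),   # old but fully trusted agent observation
     ((2, 3), 0.5, 10.0),  # fresh external report, moderately trusted
     ((7, 1), 0.8, 10.0)], # fresh, fairly trusted report elsewhere
    now=10.0,
)
```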
{"title":"Multi-object active search and tracking by multiple agents in untrusted, dynamically changing environments","authors":"Mingi Jeong, Cristian Molinaro, Tonmoay Deb, Youzhi Zhang, Andrea Pugliese, Eugene Santos Jr., V. S. Subrahmanian, Alberto Quattrini Li","doi":"10.1007/s10514-025-10218-3","DOIUrl":"10.1007/s10514-025-10218-3","url":null,"abstract":"<div><p>This paper addresses the problem of both actively searching and tracking multiple unknown dynamic objects in a known environment with multiple cooperative autonomous agents with partial observability. The tracking of a target ends when the uncertainty is below a specified threshold. Current methods typically assume homogeneous agents without access to external information and utilize short-horizon target predictive models. Such assumptions limit real-world applications. We propose a fully integrated pipeline where the main novel contributions are: (1) a time-varying weighted belief representation capable of handling knowledge that changes over time, which includes external reports of varying levels of trustworthiness in addition to the agents involved; (2) the integration of a Long Short Term Memory-based trajectory prediction within the optimization framework for long-horizon decision-making, which accounts for trajectory prediction in time-configuration space, thus increasing responsiveness; and (3) a comprehensive system that accounts for multiple agents and enables information-driven optimization during both the search and track tasks. When communication is available, our proposed strategy consolidates exploration results collected asynchronously by agents and external sources into a headquarters, who can allocate each agent to maximize the overall team’s utility, effectively using all available information. We tested our approach extensively in Monte Carlo simulations against baselines, representative of classes of approaches from the literature, and in robustness and ablation studies. 
In addition, we performed experiments in a 3D physics based engine robot simulator to test the applicability in the real world, as well as with real-world trajectories obtained from an oceanography computational fluid dynamics simulator. Results show the effectiveness of our proposed method, which achieves mission completion times that are 1.3 to 3.2 times faster in finding all targets, in most scenarios, including challenging ones, where the number of targets is 5 times greater than that of the agents.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"50 1","pages":""},"PeriodicalIF":4.3,"publicationDate":"2025-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145612909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-28 | DOI: 10.1007/s10514-025-10227-2
Luke Robinson, Matthew Gadd, Paul Newman, Daniele De Martini
This paper proposes a novel system to conduct visual servoing of a mobile robot using multiple uncalibrated, wall-mounted cameras. Specifically, we utilise a constellation of such sensors to cover a wide area by allowing robot control to be passed between cameras in regions where their fields of view overlap. This method, in conjunction with the fact that all computing is also executed offboard, allows for simpler and cheaper robots to be deployed in controlled and finite spaces. Our method simplifies the natural installation cycle of a newly deployed camera network, eliminating the need for explicit camera positioning or orientation, both globally (relative to a building plan) and locally (among viewpoints). Our system memorises pixel-wise topological connections between viewpoints by leveraging natural human exploration of the environment. We detect graph edges through simultaneous detections of the same person across different cameras, allowing us to automatically construct an evolving graph that represents overlapping fields of view within the camera network. In combination with a hybrid-A*-based planner, our approach allows efficient planning and control of robots across a wide area by traversing cameras between areas of overlap. We validate our approach through autonomous traversals in a productive office environment, using a network of six cameras, and compare our performance against both human teleoperation and a traditional Simultaneous Localisation and Mapping (SLAM) approach.
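The edge-detection idea — two cameras overlap if they detect the same person at the same time — can be sketched at camera granularity. The paper memorises pixel-wise topological connections; this simplification keeps only which cameras share a field of view:

```python
from collections import defaultdict

def build_handover_graph(detections):
    """Build the camera-overlap graph from co-detections.

    `detections` maps a timestamp to the set of camera ids that saw the
    same person at that instant; an edge between two cameras means their
    fields of view overlap, so robot control can be handed over there.
    """
    graph = defaultdict(set)
    for cams in detections.values():
        cams = sorted(cams)
        for i in range(len(cams)):
            for j in range(i + 1, len(cams)):
                graph[cams[i]].add(cams[j])
                graph[cams[j]].add(cams[i])
    return dict(graph)

g = build_handover_graph({
    0: {"cam1", "cam2"},  # person seen by cam1 and cam2 simultaneously
    5: {"cam2", "cam3"},
    9: {"cam3"},          # single-camera detection adds no edge
})
```

Because the graph evolves from natural human movement, no explicit camera calibration or positioning is needed — the central idea of the system.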
{"title":"Robot-relay: building-wide, calibration-less visual servoing with learned sensor handover networks","authors":"Luke Robinson, Matthew Gadd, Paul Newman, Daniele De Martini","doi":"10.1007/s10514-025-10227-2","DOIUrl":"10.1007/s10514-025-10227-2","url":null,"abstract":"<div><p>This paper proposes a novel system to conduct visual servoing of a mobile robot using multiple uncalibrated, wall-mounted cameras. Specifically, we utilise a constellation of such sensors to cover a wide area by allowing robot control to be passed between cameras in regions where their fields of view overlap. This method, in conjunction with the fact that all computing is also executed offboard, allows for simpler and cheaper robots to be deployed in controlled and finite spaces. Our method simplifies the natural installation cycle of a newly deployed camera network, eliminating the need for explicit camera positioning or orientation, both globally (relative to a building plan) and locally (among viewpoints). Our system memorises pixel-wise topological connections between viewpoints by leveraging natural human exploration of the environment. We detect graph edges through simultaneous detections of the same person across different cameras, allowing us to automatically construct an evolving graph that represents overlapping fields of view within the camera network. In combination with a hybrid-A*-based planner, our approach allows efficient planning and control of robots across a wide area by traversing cameras between areas of overlap. 
We validate our approach through autonomous traversals in a productive office environment, using a network of six cameras, and compare our performance against both human teleoperation and a traditional Simultaneous Localisation and Mapping (SLAM) approach.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"50 1","pages":""},"PeriodicalIF":4.3,"publicationDate":"2025-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10514-025-10227-2.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145612907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-28 | DOI: 10.1007/s10514-025-10231-6
Thales C. Silva, Xi Yu, M. Ani Hsieh
Multi-robot systems are broadly used in applications such as search and rescue, environmental monitoring, and mapping of unknown environments. Effective coordination among these robots often relies on distributed information and local decision-making. However, maintaining constant communication links between robots can be challenging due to environmental and task constraints. Robots can move around to seek temporary communication links that, over time, jointly establish the intermittent connectivity of the network. This paper aims to incorporate temporal communication constraints into path planning for multi-robot teams that have stochastic motion and handle complex tasks specified in a temporal order. We use formal methods to model the temporal specification of tasks. Task assignments and high-level communication requirements are provided to individual robots on a multi-robot team as independent temporal logic expressions. Robots update their plans for future communication events according to their local decision-making algorithms and jointly synthesize a bottom-up policy to meet the communication requirements. We provide a strategy to maintain intermittent connectivity while satisfying a risk constraint. In addition, we systematically analyze the impact of different rendezvous selection strategies, comparing cost functions that minimize the total traveled distance, balance distances among robots, or incorporate risk awareness. Our simulation results suggest that the proposed method effectively accommodates diverse operational preferences, enhancing flexibility, robustness, and overall mission performance.
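Two of the compared rendezvous cost functions can be illustrated directly; straight-line distances stand in for the planner-derived travel costs used in the paper, and the candidate points below are invented:

```python
import math

def total_distance_cost(dists):
    """Minimise the team's total travel to the rendezvous."""
    return sum(dists)

def balanced_cost(dists):
    """Balance the load: minimise the worst single robot's travel."""
    return max(dists)

def pick_rendezvous(candidates, robot_positions, cost_fn):
    """Choose the rendezvous point minimising `cost_fn` over the
    robots' straight-line travel distances."""
    def dists_to(p):
        return [math.dist(p, r) for r in robot_positions]
    return min(candidates, key=lambda p: cost_fn(dists_to(p)))

robots = [(0, 0), (10, 0), (0, 8)]
cheapest_total = pick_rendezvous([(0, 0), (4, 3)], robots, total_distance_cost)
most_balanced = pick_rendezvous([(0, 0), (4, 3)], robots, balanced_cost)
```

The two criteria disagree here: the total-distance cost keeps the meeting at one robot's location, while the balanced cost pulls it toward the team's middle — the kind of operational preference the paper's analysis compares.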
{"title":"Probabilistic multi-robot planning with temporal tasks and communication constraints","authors":"Thales C. Silva, Xi Yu, M. Ani Hsieh","doi":"10.1007/s10514-025-10231-6","DOIUrl":"10.1007/s10514-025-10231-6","url":null,"abstract":"<div><p>Multi-robot systems are broadly used in applications such as search and rescue, environmental monitoring, and mapping of unknown environments. Effective coordination among these robots often relies on distributed information and local decision-making. However, maintaining constant communication links between robots can be challenging due to environmental and task constraints. Robots can move around to seek temporal communication links that over time jointly establish the intermittent connectivity of the network. This paper aims to incorporate temporal communication constraints into the path planning for multi-robot teams with stochastic motion and handling complex tasks specified in a temporal order. We use formal methods to model the temporal specification of tasks. Task assignments and high-level communication requirements are provided to individual robots on a multi-robot team as independent temporal logic expressions. Robots update their plans for future communication events according to their local decision-making algorithms and jointly synthesize a bottom-up policy to meet the communication requirements. We provide a strategy to maintain intermittent connectivity while satisfying a risk constraint. In addition, we systematically analyze the impact of different rendezvous selection strategies, comparing cost functions that minimize the total traveled distance, balance distances among robots, or incorporate risk awareness. 
Our simulation results suggest that the proposed method effectively accommodates diverse operational preferences, enhancing flexibility, robustness, and overall mission performance.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"50 1","pages":""},"PeriodicalIF":4.3,"publicationDate":"2025-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145612910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-22 | DOI: 10.1007/s10514-025-10228-1
Simon Jones, Sabine Hauert
Building a distributed spatial awareness within a swarm of locally sensing and communicating robots enables new swarm algorithms. We use local observations by robots of each other and Gaussian belief propagation message passing combined with continuous swarm movement to build a global and distributed swarm-centric frame of reference. With low bandwidth and computation requirements, this shared reference frame allows new swarm algorithms. We characterise the system in simulation and demonstrate two example algorithms, then demonstrate reliable performance on real robots with imperfect sensing.
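Gaussian belief propagation for a shared reference frame can be sketched in one dimension: a chain of robots, each measuring its neighbour's relative offset, with robot 0 anchoring the swarm frame. The paper's version is 2-D and fully distributed; this sweep-based 1-D chain is a minimal illustration under those simplifying assumptions:

```python
def gbp_chain(rel_measurements, meas_prec=1.0, anchor_prec=1e6, iters=50):
    """Gaussian belief propagation on a 1-D chain of robots.

    rel_measurements[i] is the measured offset of robot i+1 relative to
    robot i. Robot 0 anchors the swarm frame via a tight prior. Returns
    the posterior mean position of every robot in that shared frame.
    Messages are stored in information form as (eta, lam) pairs.
    """
    n = len(rel_measurements) + 1
    fwd = [(0.0, 0.0)] * n   # message into node i from its left neighbour
    bwd = [(0.0, 0.0)] * n   # message into node i from its right neighbour
    prior = [(0.0, anchor_prec)] + [(0.0, 0.0)] * (n - 1)
    for _ in range(iters):
        for i in range(1, n):            # left-to-right sweep
            eta, lam = prior[i - 1]
            fe, fl = fwd[i - 1]
            lam_i = lam + fl             # belief at i-1, excluding msg from i
            if lam_i == 0.0:
                continue                 # no information yet: vacuous message
            mu = (eta + fe) / lam_i
            var = 1.0 / lam_i + 1.0 / meas_prec
            fwd[i] = ((mu + rel_measurements[i - 1]) / var, 1.0 / var)
        for i in range(n - 2, -1, -1):   # right-to-left sweep
            eta, lam = prior[i + 1]
            be, bl = bwd[i + 1]
            lam_i = lam + bl
            if lam_i == 0.0:
                continue
            mu = (eta + be) / lam_i
            var = 1.0 / lam_i + 1.0 / meas_prec
            bwd[i] = ((mu - rel_measurements[i]) / var, 1.0 / var)
    means = []
    for i in range(n):                   # combine prior and both messages
        eta = prior[i][0] + fwd[i][0] + bwd[i][0]
        lam = prior[i][1] + fwd[i][1] + bwd[i][1]
        means.append(eta / lam)
    return means

# four robots; robot 1 is +1.0 from robot 0, robot 2 is +2.0 from robot 1, ...
means = gbp_chain([1.0, 2.0, -0.5])
```

With consistent measurements the posterior means reduce to cumulative offsets from the anchor, giving every robot a position in the common swarm-centric frame from purely local exchanges.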
{"title":"Distributed spatial awareness for robot swarms","authors":"Simon Jones, Sabine Hauert","doi":"10.1007/s10514-025-10228-1","DOIUrl":"10.1007/s10514-025-10228-1","url":null,"abstract":"<div><p>Building a distributed spatial awareness within a swarm of locally sensing and communicating robots enables new swarm algorithms. We use local observations by robots of each other and Gaussian belief propagation message passing combined with continuous swarm movement to build a global and distributed swarm-centric frame of reference. With low bandwidth and computation requirements, this shared reference frame allows new swarm algorithms. We characterise the system in simulation and demonstrate two example algorithms, then demonstrate reliable performance on real robots with imperfect sensing.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"49 4","pages":""},"PeriodicalIF":4.3,"publicationDate":"2025-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10514-025-10228-1.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145561537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-21 | DOI: 10.1007/s10514-025-10222-7
Luca Morando, Xingyuan Zhou, Farokh Atashzar, Giuseppe Loianno
Aerial robots have the potential to play a crucial role in assisting humans with complex and dangerous tasks, with the goal of decreasing users’ cognitive and physical workload. In addition, many applications will require aerial robots to be ubiquitous and to share the same environment with human operators. This calls for novel solutions that enable seamless, transparent, and efficient human-drone collaboration and co-working in the same workspace. In this paper, we present a novel tele-immersive approach that promotes cognitive and physical collaboration between humans and robots through Mixed Reality (MR). We develop a bi-directional spatial awareness module and a new virtual-physical interaction approach integrated on a head-mounted display with MR. Furthermore, we design two alternative methods for spatial and physical interaction. Both solutions use a 2D monitor for spatial representation; one involves a mouse and keyboard, while the other uses a haptic interface with the new VAC solution for physical interaction. This setup allows us to analyze how different physical embodiments might compensate for reduced spatial representation during interaction tasks. Finally, to validate our approach and our comparative study, we conduct a comprehensive user case study in which 12 subjects with different expertise and backgrounds complete a target-reaching task in a cluttered indoor environment. The evaluation considers both subjective metrics, such as the System Usability Scale and the NASA Task Load Index, and objective measures, including completion time, distance traveled to reach the goal, and smoothness of movements. The results demonstrate enhanced user interaction and control capabilities when using our novel tele-immersive MR approach compared to the two alternative solutions. Additionally, the experiments demonstrate the potential of the proposed system as an innovative collaboration approach between a non-expert human and an aerial robot for exploration and inspection tasks in unknown environments. Video: https://youtu.be/q8Dq-cNxcig
{"title":"Human-drone collaboration via mixed-reality for efficient navigation and interaction in constrained environments: a comprehensive user case study","authors":"Luca Morando, Xingyuan Zhou, Farokh Atashzar, Giuseppe Loianno","doi":"10.1007/s10514-025-10222-7","DOIUrl":"10.1007/s10514-025-10222-7","url":null,"abstract":"<div><p>Aerial Robots have the potential to play a crucial role in assisting humans in complex and dangerous tasks with the goal to decrease users’ cognitive and physical workload. In addition, many applications will require aerial robots to be ubiquitous and share the same environment with human operators. Therefore, this calls for novel solutions to enable seamless, transparent, and efficient human-drone collaboration and co-working in the same workspace. In this paper, we present a novel tele-immersive approach that promotes cognitive and physical collaboration between humans and robots through Mixed Reality (MR). We develop a bi-directional spatial awareness module and a new virtual-physical interaction approach integrated on a head-mounted display with MR. Furthermore, we design two alternative methods for spatial and physical interaction. Both solutions use a 2D monitor for spatial representation, with one method involving a mouse and keyboard, and the other using a haptic interface with the new VAC solution for physical interaction. This setup allows us to study how to analyze how different physical embodiments might compensate for reduced spatial representation during interaction tasks. Finally, to validate our approach and our comparative study, we propose a comprehensive user case study where 12 subjects with different expertise and background are tasked to complete a target reaching task in an indoor cluttered environment. 
We consider as part of the evaluation both subjective metrics, such as the System Usability Scale and the NASA Task Load Index, as well as objective measures, including completion time, distance traveled to reach the goal, and smoothness of movements. The results demonstrate enhanced user interaction and control capabilities during the task when using our novel tele-immersive approach with MR compared to the two alternative solutions. Additionally, the experiments show the opportunity to employ the proposed system as an innovative collaboration approach between a non-expert human and an aerial robot for exploration and inspection tasks in unknown environments. Video: https://youtu.be/q8Dq-cNxcig</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"49 4","pages":""},"PeriodicalIF":4.3,"publicationDate":"2025-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145561603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
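The objective measures named in the evaluation, distance traveled and smoothness of movement, can be computed from a sampled trajectory. The sketch below uses finite-difference jerk as the smoothness proxy and invented trajectories; the paper's exact metric definitions may differ.

```python
import math

def path_length(traj):
    """Total distance traveled along a sampled 2-D trajectory."""
    return sum(math.dist(a, b) for a, b in zip(traj, traj[1:]))

def mean_squared_jerk(traj, dt):
    """Common smoothness proxy: mean squared third derivative of position,
    estimated by repeated finite differences (lower = smoother)."""
    vel = [((b[0] - a[0]) / dt, (b[1] - a[1]) / dt) for a, b in zip(traj, traj[1:])]
    acc = [((b[0] - a[0]) / dt, (b[1] - a[1]) / dt) for a, b in zip(vel, vel[1:])]
    jerk = [((b[0] - a[0]) / dt, (b[1] - a[1]) / dt) for a, b in zip(acc, acc[1:])]
    return sum(jx * jx + jy * jy for jx, jy in jerk) / len(jerk)

# Invented trajectories sampled at dt = 0.1 s.
straight = [(t * 0.1, 0.0) for t in range(20)]               # constant velocity
jittery = [(t * 0.1, 0.05 * (-1) ** t) for t in range(20)]   # oscillating
```

On these samples the straight trajectory has essentially zero jerk while the oscillating one scores far worse, matching the intuition that smoother operator control yields lower values.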