Online as-Built Building Information Model Update for Robotic Monitoring in Construction Sites
Pub Date: 2024-03-20 | DOI: 10.1007/s10846-024-02087-2
Abstract
Today, automated techniques for updating as-built Building Information Models (BIM) rely on offline algorithms, restricting the update frequency to an extent where continuous monitoring becomes nearly impossible. To address this problem, we propose a new method for robotic monitoring that updates an as-built BIM in real time by solving a Simultaneous Localization and Mapping (SLAM) problem in which the map is represented as a collection of elements from the as-planned BIM. The suggested approach is based on the Rao-Blackwellized Particle Filter (RBPF), which enables explicit injection of prior knowledge from the building’s construction schedule, i.e., from a 4D BIM, or from its elements’ spatial relations. In the methods section, we describe the benefits of using an exact inverse sensor model that provides a measure of the existence probability of elements while considering the entire probabilistic existence belief map. We continue by outlining robustification techniques that cover both the geometrical and temporal dimensions and present how we account for common pose and shape mistakes in constructed elements. Additionally, we show that our method reduces to standard Monte Carlo Localization (MCL) in known areas. We conclude by presenting simulation results of the proposed method and comparing it to adjacent alternatives.
{"title":"Online as-Built Building Information Model Update for Robotic Monitoring in Construction Sites","authors":"","doi":"10.1007/s10846-024-02087-2","DOIUrl":"https://doi.org/10.1007/s10846-024-02087-2","url":null,"abstract":"<h3>Abstract</h3> <p>Today, automated techniques for the update of as-built Building Information Models (BIM) make use of offline algorithms restricting the update frequency to an extent where continuous monitoring becomes nearly impossible. To address this problem, we propose a new method for robotic monitoring that updates an as-built BIM in real-time by solving a Simultaneous Localization and Mapping (SLAM) problem where the map is represented as a collection of elements from the as-planned BIM. The suggested approach is based on the Rao-Blackwellized Particle Filter (RBPF) which enables explicit injection of prior knowledge from the building’s construction schedule, i.e., from a 4D BIM, or its elements’ spatial relations. In the methods section we describe the benefits of using an exact inverse sensor model that provides a measure for the existence probability of elements while considering the entire probabilistic existence belief map. We continue by outlining robustification techniques that include both geometrical and temporal dimensions and present how we account for common pose and shape mistakes in constructed elements. Additionally, we show that our method reduces to the standard Monte Carlo Localization (MCL) in known areas. We conclude by presenting simulation results of the proposed method and comparing it to adjacent alternatives.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"103 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2024-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140204072","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nonsingular Hierarchical Approach for Trajectory Tracking Control of Miniature Helicopter with Model Uncertainties
Pub Date: 2024-03-18 | DOI: 10.1007/s10846-024-02072-9
Ce Liu
Based on a hierarchical inner-outer loop strategy, the tracking control for a helicopter system can be designed separately for the position loop and the attitude loop, which simplifies the underactuated control problem. However, due to the nonlinear coupling between the position dynamics and the rotation dynamics, the performance of the position control is affected by attitude errors, especially when the attitude control cannot track the reference attitude instantaneously. This work provides a hierarchical trajectory tracking control design for a helicopter with model uncertainties, ensuring the stability of the overall system in the presence of the perturbation caused by attitude tracking errors and the nonlinear coupling. The attitude of the helicopter is described by a unit quaternion, for which an anti-unwinding control design is presented. In addition, a criterion for avoiding singularity in the generation of the reference attitude is derived. Simulation results demonstrate the effectiveness of the design.
{"title":"Nonsingular Hierarchical Approach for Trajectory Tracking Control of Miniature Helicopter with Model Uncertainties","authors":"Ce Liu","doi":"10.1007/s10846-024-02072-9","DOIUrl":"https://doi.org/10.1007/s10846-024-02072-9","url":null,"abstract":"<p>Based on hierarchical inner-outer loop strategy, the tracking control for the helicopter system could be designed individually for the position loop and for the attitude loop, thus simplifying the underactuated control problem. However, due to the nonlinear coupling between the position dynamics and rotation dynamics, the performance of the position control is affected by attitude errors, especially when the attitude control can not tracks the reference attitude instantaneously. This work provides a hierarchical trajectory tracking control design for the helicopter with model uncertainties, ensuring the stability of the overall system considering the perturbation caused by attitude tracking errors and the nonlinear coupling. The attitude of the helicopter is descried by unit-quaternion, for which anti-unwinding control design is presented. Besides, the criteria for avoidance of singularity in generation of the reference attitude is derived. Simulation results demonstrate the effectiveness of the design.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"196 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2024-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140166996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Regulating Autonomy in Civilian Drones: Towards a Spectral Approach
Pub Date: 2024-03-18 | DOI: 10.1007/s10846-024-02056-9
Samar Abbas Nawaz
Civilian drones are becoming more functionally independent from human involvement, which sets them on a path towards “autonomous” status. When defining “autonomy,” the European Union (EU) regulations, among other jurisdictions, employ an all-or-nothing approach, according to which a drone is either able to operate fully autonomously or not at all. This dichotomous approach disregards the various levels of drone autonomy and fails to capture the complexity of civilian drone operation. Within the EU, this has regulatory implications, such as regulatory lag, hindrance to better safety regulation, and incoherence with the Union’s regulatory approach towards Artificial Intelligence (AI). This article argues that understanding autonomy as a spectrum, rather than in a dichotomous way, would be more coherent with the technical functioning of drones and would avoid potential regulatory problems caused by the current dichotomous approach. In delineating this spectral approach, the article (1) analyses manifestations of autonomy in drone operations, (2) outlines efforts in the technical literature and drone standardization to conceptualize “autonomy”, and (3) explores definitional attempts for autonomy made in three other technologies: self-driving cars, autonomous weapon systems, and autonomous maritime ships.
{"title":"Regulating Autonomy in Civilian Drones: Towards a Spectral Approach","authors":"Samar Abbas Nawaz","doi":"10.1007/s10846-024-02056-9","DOIUrl":"https://doi.org/10.1007/s10846-024-02056-9","url":null,"abstract":"<p>Civilian drones are becoming more functionally independent from human involvement which sets them on a path towards “autonomous” status. When defining “autonomy,” the European Union (EU) regulations, among other jurisdictions, employ an all-or-nothing approach, according to which a drone is either able to operate fully autonomously or not at all. This dichotomous approach disregards the various levels of drone autonomy and fails to capture the complexity of civilian drone operation. Within the EU, this has regulatory implications, such as regulatory lag, hindrance in better safety regulation, and incoherence with the Union’s regulatory approach towards Artificial Intelligence (AI). This article argues that understanding autonomy as a spectrum, rather than in a dichotomous way, would be more coherent with the technical functioning of drone and would avoid potential regulatory problems caused by the current dichotomous approach. In delineating this spectral approach, this article (1) analyses manifestations of autonomy in drone operations, (2) delineates efforts in the technical literatures and drone standardization to conceptualize “autonomy”, and (3) explores definitional attempts for autonomy made in three other technologies: self-driving cars, autonomous weapon systems, and autonomous maritime ships.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"21 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2024-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140166856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Emerging Frontiers in Human–Robot Interaction
Pub Date: 2024-03-18 | DOI: 10.1007/s10846-024-02074-7
Farshad Safavi, Parthan Olikkal, Dingyi Pei, Sadia Kamal, Helen Meyerson, Varsha Penumalee, Ramana Vinjamuri
Effective interactions between humans and robots are vital to achieving shared tasks in collaborative processes. Robots can utilize diverse communication channels to interact with humans, such as hearing, speech, sight, touch, and learning. Among the various means of interaction between humans and robots, our focus is on three emerging frontiers that significantly impact the future directions of human–robot interaction (HRI): (i) human–robot collaboration inspired by human–human collaboration, (ii) brain-computer interfaces, and (iii) emotionally intelligent perception. First, we explore advanced techniques for human–robot collaboration, covering a range of methods from compliance- and performance-based approaches to synergistic and learning-based strategies, including learning from demonstration, active learning, and learning from complex tasks. Then, we examine innovative uses of brain-computer interfaces for enhancing HRI, with a focus on applications in rehabilitation, communication, and brain state and emotion recognition. Finally, we investigate emotional intelligence in robotics, focusing on translating human emotions to robots via facial expressions, body gestures, and eye tracking for fluid, natural interactions. Recent developments in these emerging frontiers and their impact on HRI are detailed and discussed, and we highlight contemporary trends and emerging advancements in the field. Ultimately, this paper underscores the necessity of a multimodal approach in developing systems capable of adaptive behavior and effective interaction between humans and robots, thus offering a thorough understanding of the diverse modalities essential for maximizing the potential of HRI.
{"title":"Emerging Frontiers in Human–Robot Interaction","authors":"Farshad Safavi, Parthan Olikkal, Dingyi Pei, Sadia Kamal, Helen Meyerson, Varsha Penumalee, Ramana Vinjamuri","doi":"10.1007/s10846-024-02074-7","DOIUrl":"https://doi.org/10.1007/s10846-024-02074-7","url":null,"abstract":"<p>Effective interactions between humans and robots are vital to achieving shared tasks in collaborative processes. Robots can utilize diverse communication channels to interact with humans, such as hearing, speech, sight, touch, and learning. Our focus, amidst the various means of interactions between humans and robots, is on three emerging frontiers that significantly impact the future directions of human–robot interaction (HRI): (i) human–robot collaboration inspired by human–human collaboration, (ii) brain-computer interfaces, and (iii) emotional intelligent perception. First, we explore advanced techniques for human–robot collaboration, covering a range of methods from compliance and performance-based approaches to synergistic and learning-based strategies, including learning from demonstration, active learning, and learning from complex tasks. Then, we examine innovative uses of brain-computer interfaces for enhancing HRI, with a focus on applications in rehabilitation, communication, brain state and emotion recognition. Finally, we investigate the emotional intelligence in robotics, focusing on translating human emotions to robots via facial expressions, body gestures, and eye-tracking for fluid, natural interactions. Recent developments in these emerging frontiers and their impact on HRI were detailed and discussed. We highlight contemporary trends and emerging advancements in the field. Ultimately, this paper underscores the necessity of a multimodal approach in developing systems capable of adaptive behavior and effective interaction between humans and robots, thus offering a thorough understanding of the diverse modalities essential for maximizing the potential of HRI.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"27 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2024-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140166871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Research on Reconfiguration Strategies for Self-reconfiguring Modular Robots: A Review
Pub Date: 2024-03-18 | DOI: 10.1007/s10846-024-02067-6
Abstract
With the progress of science and technology, it has become clear that the traditional robot is fixed to its workplace, single-function, and inflexible, and may not work properly in some special environments. A modular robot with a self-reconfiguration function, by contrast, can adapt to new environments and to new task settings: it is composed of a series of universal modules and relies on communication between modules and autonomous reorganization movements to cope with changes in the environment or task and to recover from damage. Centered on reconfiguration strategy planning methods and based on the design characteristics of self-reconfiguring modular robots, this paper summarizes representative international research results from two aspects of robot hardware design. On this basis, existing problems and shortcomings are pointed out to provide ideas and perspectives for future research and development.
{"title":"Research on Reconfiguration Strategies for Self-reconfiguring Modular Robots: A Review","authors":"","doi":"10.1007/s10846-024-02067-6","DOIUrl":"https://doi.org/10.1007/s10846-024-02067-6","url":null,"abstract":"<h3>Abstract</h3> <p>With the progress of science and technology, the traditional robot workplace is fixed, single-function, and inflexible, and may not work properly in some special places, while the modular robot with self-reconfiguration function is a robot that can adapt to new environments and can rely on new task settings, which has a series of universal modules and relies on mutual communication between modules and autonomous reorganization movements to cope with changes in the environment or tasks and recover from the state of destruction. This paper summarizes the representative international research results from the perspective of the hardware design of robots in two aspects based on the design characteristics of self-reconfiguring modular robots around the reconfiguration strategy planning method. At the same time, some existing problems and shortcomings are pointed out on this basis to provide ideas as well as perspectives for future research development.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"21 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2024-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140166872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Developing Computational Thinking in Middle School with an Educational Robotics Resource
Pub Date: 2024-03-18 | DOI: 10.1007/s10846-024-02082-7
Almir de O. Costa Junior, Elloá B. Guedes, João Paulo F. Lima e Silva, José Anglada Rivera
Computational Thinking has been recognized as an essential skill to be developed in individuals of the 21st century. Various initiatives worldwide have been proposed to establish the most effective educational strategies and resources to support the development of these skills. With the publication of the Standards for Computing in Basic Education in Brazil (a complement to the National Common Curricular Base), Computer Science is expected to be taught as a fundamental science from Early Childhood Education to High School. In this context, this study presents the students’ learning results and the usability evaluation of the ThinkCarpet, an interactive educational robotics artifact built using alternative materials and Arduino, designed to aid Middle School students in developing the concept of algorithms. Regarding the students’ learning, an average of 93.75% valid solutions was observed for the algorithms validated through the use of the ThinkCarpet, whereas only 62% valid solutions were identified in activities outside the proposed resource. The application of the System Usability Scale (SUS) yielded a score of 83.59, which classifies the ThinkCarpet as excellent in a realistic scenario.
{"title":"Developing Computational Thinking in Middle School with an Educational Robotics Resource","authors":"Almir de O. Costa Junior, Elloá B. Guedes, João Paulo F. Lima e Silva, José Anglada Rivera","doi":"10.1007/s10846-024-02082-7","DOIUrl":"https://doi.org/10.1007/s10846-024-02082-7","url":null,"abstract":"<p>Computational Thinking has been recognized as an essential skill to be developed in individuals of the 21st Century. Various initiatives worldwide have been proposed to establish the most effective educational strategies and resources to support the development of these skills. With the publication of the Standards for Computing in Basic Education in Brazil (Complement to the National Base Common Curricular), Computer Science is expected to be taught as a fundamental science from Early Childhood Education to High School. In this context, this study presents the results of the students’ learning and the usability evaluation of the ThinkCarpet: an interactive educational robotics artifact built using alternative materials and Arduino, with the purpose of aiding in the development of the concept of algorithms in students from Middle School. Regarding the students’ learning, an average of 93.75% of valid solutions was observed for the algorithms validated through the use of the ThinkCarpet. In contrast, only 62% of valid solutions were identified in activities outside the proposed resource. As for the results of the application of the System Usability Scale (SUS), the results show a score of 83.59, which classifies the ThinkCarpet as excellent in a realistic scenario.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"23 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2024-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140166865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Novel Communication Time-Delay Cooperative Control Method with Switching Event-Triggered Strategy
Pub Date: 2024-03-11 | DOI: 10.1007/s10846-024-02076-5
Abstract
A novel communication time-delay classification-based method is designed for nonlinear multiagent systems with a finite-time prescribed performance function. The time-delay phenomenon in the communication channels between agents is discussed. Then, an improved time-delay classification method is proposed that broadens the classification mechanism by considering the degree of deviation and relative variation of neighboring agents, rather than simply splitting delays into large and small time-delays. Based on this, a unified Lyapunov-Krasovskii functional and the finite-time performance function are used to handle the large time-delay case and to ensure that the error stays within the preset boundary, respectively. Furthermore, a modified switching event-triggered strategy is put forward to reduce the transmission burden; it considers the impact of the tracking error to adjust the threshold condition in real time. Additionally, all signals of the closed-loop system are bounded. Finally, two simulation examples verify the validity of the control strategy.
{"title":"A Novel Communication Time-Delay Cooperative Control Method with Switching Event-Triggered Strategy","authors":"","doi":"10.1007/s10846-024-02076-5","DOIUrl":"https://doi.org/10.1007/s10846-024-02076-5","url":null,"abstract":"<h3>Abstract</h3> <p>A novel communication time-delay classification-based method is designed for nonlinear multiagent systems with the finite-time prescribed performance function. The time-delay phenomenon for communication channels between agents is discussed. Then, an improved time-delay classification method is proposed to broaden the standard of classification mechanism by considering the degree of deviation and relative variation of neighbor agents, rather than classifying the delay time into large time-delay and small time-delay. Based on this, the unified Lyapunov-Krasovskii functional and the finite-time performance function are used to solve the large time-delay phenomenon and ensure that the error is within the preset boundary, respectively. Furthermore, a modified switching event-triggered strategy is put forward to reduce the transmission burden, which considers the impact of tracking error to adjust the threshold condition in real-time. Additionally, all signals of the closed-loop systems are bounded. Eventually, two simulation examples verify the validity of the control strategy.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"34 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2024-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140097941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How Challenging is a Challenge? CEMS: a Challenge Evaluation Module for SLAM Visual Perception
Pub Date: 2024-03-09 | DOI: 10.1007/s10846-024-02077-4
Xuhui Zhao, Zhi Gao, Hao Li, Hong Ji, Hong Yang, Chenyang Li, Hao Fang, Ben M. Chen
Despite promising SLAM research in both the vision and robotics communities, which fundamentally sustains the autonomy of intelligent unmanned systems, visual challenges still severely threaten its robust operation. Existing SLAM methods usually focus on specific challenges and solve the problem with sophisticated enhancement or multi-modal fusion. However, they are largely limited to particular scenes, with no quantitative understanding and awareness of the challenges, resulting in significant performance declines with poor generalization and/or redundant computation with inflexible mechanisms. To push the frontier of visual SLAM, we propose a fully computational, reliable evaluation module called CEMS (Challenge Evaluation Module for SLAM) for general visual perception, based on a clear definition and systematic analysis. It decomposes various challenges into several common aspects and evaluates degradation with corresponding indicators. Extensive experiments demonstrate its feasibility and superior performance. The proposed module achieves a consistency of 88.298% compared with annotation ground truth and a strong correlation of 0.879 with SLAM tracking performance. Moreover, we present a prototype SLAM system based on CEMS with better performance, as well as the first comprehensive CET (Challenge Evaluation Table) for common SLAM datasets (EuRoC, KITTI, etc.) with objective and fair evaluations of various challenges. We make it available online on our website to benefit the community.
{"title":"How Challenging is a Challenge? CEMS: a Challenge Evaluation Module for SLAM Visual Perception","authors":"Xuhui Zhao, Zhi Gao, Hao Li, Hong Ji, Hong Yang, Chenyang Li, Hao Fang, Ben M. Chen","doi":"10.1007/s10846-024-02077-4","DOIUrl":"https://doi.org/10.1007/s10846-024-02077-4","url":null,"abstract":"<p>Despite promising SLAM research in both vision and robotics communities, which fundamentally sustains the autonomy of intelligent unmanned systems, visual challenges still threaten its robust operation severely. Existing SLAM methods usually focus on specific challenges and solve the problem with sophisticated enhancement or multi-modal fusion. However, they are basically limited to particular scenes with a non-quantitative understanding and awareness of challenges, resulting in a significant performance decline with poor generalization and(or) redundant computation with inflexible mechanisms. To push the frontier of visual SLAM, we propose a fully computational reliable evaluation module called CEMS (Challenge Evaluation Module for SLAM) for general visual perception based on a clear definition and systematic analysis. It decomposes various challenges into several common aspects and evaluates degradation with corresponding indicators. Extensive experiments demonstrate our feasibility and outperformance. The proposed module has a high consistency of 88.298% compared with annotation ground truth, and a strong correlation of 0.879 compared with SLAM tracking performance. Moreover, we show the prototype SLAM based on CEMS with better performance and the first comprehensive CET (Challenge Evaluation Table) for common SLAM datasets (EuRoC, KITTI, etc.) with objective and fair evaluations of various challenges. We make it available online to benefit the community on our website.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"71 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2024-03-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140097731","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Terrain-Shape-Adaptive Coverage Path Planning With Traversability Analysis
Pub Date: 2024-03-07 | DOI: 10.1007/s10846-024-02073-8
Wenwei Qiu, Dacheng Zhou, Wenbo Hui, Afimbo Reuben Kwabena, Yubo Xing, Yi Qian, Quan Li, Huayan Pu, Yangmin Xie
Coverage path planning (CPP) is in great demand, with applications in agriculture, mining, manufacturing, etc. Most research in this area has focused on 2D CPP problems, solving the coverage problem on irregular 2D maps. By comparison, CPP on uneven terrain is not fully solved. When the working field contains many sloped areas, it is necessary to adjust the path shape and adapt it to the 3D terrain surface to save energy. This article proposes a terrain-shape-adaptive CPP method with three significant features. First, the paths grow by themselves according to the local terrain surface shape. Second, the growth rule uses a 3D terrain traversability analysis, which makes the paths automatically avoid entering hazardous zones. Third, the irregularly distributed paths are connected in an optimal sequence with an improved genetic algorithm. As a result, the method provides an autonomously growing, terrain-adaptive coverage path with higher energy efficiency and coverage rate than previous works. It is demonstrated on various maps and proven to be robust to terrain conditions.
{"title":"Terrain-Shape-Adaptive Coverage Path Planning With Traversability Analysis","authors":"Wenwei Qiu, Dacheng Zhou, Wenbo Hui, Afimbo Reuben Kwabena, Yubo Xing, Yi Qian, Quan Li, Huayan Pu, Yangmin Xie","doi":"10.1007/s10846-024-02073-8","DOIUrl":"https://doi.org/10.1007/s10846-024-02073-8","url":null,"abstract":"<p>Coverage path planning (CPP) is in great demand with applications in agriculture, mining, manufacturing, etc. Most research in this area focused on 2D CPP problems solving the coverage problem with irregular 2D maps. Comparatively, CPP on uneven terrains is not fully solved. When there are many slopy areas in the working field, it is necessary to adjust the path shape and make it adapt to the 3D terrain surface to save energy consumption. This article proposes a terrain-shape-adaptive CPP method with three significant features. First, the paths grow by themselves according to the local terrain surface shapes. Second, the growth rule utilizes the 3D terrain traversability analysis, which makes them automatically avoid entering hazardous zones. Third, the irregularly distributed paths are connected under an optimal sequence with an improved genetic algorithm. As a result, the method can provide an autonomously growing terrain-adaptive coverage path with high energy efficiency and coverage rate compared to previous research works. It is demonstrated on various maps and is proven to be robust to terrain conditions.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"22 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2024-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140055301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-UAV Collaborative System for the Identification of Surface Cyanobacterial Blooms and Aquatic Macrophytes
Pub Date: 2024-02-23 | DOI: 10.1007/s10846-023-02043-6
Kelen C. T. Vivaldini, Tatiana F. P. A. T. Pazelli, Lidia G. S. Rocha, Igor A. D. Santos, Kenny A. Q. Caldas, Diego P. Soler, João R. S. Benevides, Paulo V. G. Simplício, André C. Hernandes, Kleber O. Andrade, Pedro H. C. Kim, Isaac G. Alvarez, Eduardo V. Nascimento, Marcela A. A. Santos, Aline G. Almeida, Lucas H. G. Cavalcanti, Roberto S. Inoue, Marco H. Terra, Marcelo Becker
Aquatic macrophyte is a generic denomination for macro-algae with active photosynthetic parts that remain totally or partially submerged in fresh or salt water, in rivers and lakes. Currently, algae monitoring is carried out manually by collecting samples to send for laboratory analysis; in most cases, harmful algal blooms are already widespread by the time the results are disclosed. This paper proposes a team of heterogeneous Unmanned Aerial Vehicles (UAVs) that cooperate to increase the system’s overall observation range and reduce the reaction time. The leader UAV, equipped with a deep-learning-based vision system, covers a pre-determined region and determines high-interest inspection areas in real time. Through a multi-robot Informative Path Planning (MIPP) approach, the leader UAV coordinates a team of customized quadcopters (named ART2) to reach points of interest, managing their routes dynamically. The ART2s are able to land on water and to collect and test samples in situ using phosphorescence sensors. While path planning, task assignment, and route management are centralized operations, each UAV is driven by a decentralized trajectory tracking controller. Simulations performed in a realistic environment implemented on the Unity platform, together with experimental proofs of concept, demonstrate the reliability of the proposed approach. The presented multi-UAV framework with heterogeneous agents also enables the reconfiguration and expansion of specific objectives, in addition to minimizing the task completion time by executing different processes in parallel. This preventive monitoring enables control actions to be taken in advance, making them faster, cheaper, and more effective.
{"title":"Multi-UAV Collaborative System for the Identification of Surface Cyanobacterial Blooms and Aquatic Macrophytes","authors":"Kelen C. T. Vivaldini, Tatiana F. P. A. T. Pazelli, Lidia G. S. Rocha, Igor A. D. Santos, Kenny A. Q. Caldas, Diego P. Soler, João R. S. Benevides, Paulo V. G. Simplício, André C. Hernandes, Kleber O. Andrade, Pedro H. C. Kim, Isaac G. Alvarez, Eduardo V. Nascimento, Marcela A. A. Santos, Aline G. Almeida, Lucas H. G. Cavalcanti, Roberto S. Inoue, Marco H. Terra, Marcelo Becker","doi":"10.1007/s10846-023-02043-6","DOIUrl":"https://doi.org/10.1007/s10846-023-02043-6","url":null,"abstract":"<p>Aquatic macrophyte is a generic denomination for macro-algae with active photosynthetic parts that remain totally or partially submerged in fresh or salty water, in rivers and lakes. Currently, algae monitoring is carried out manually by collecting samples to send for laboratory analysis. In most cases, harmful algal blooms are already widespread when the results are disclosed. This paper proposes the application of a team of heterogeneous Unmanned Aerial Vehicles (UAVs) that cooperate to increase the system’s overall observation range and reduce the reaction time. Leader UAV, featured with a deep-learning-based vision system, covers a pre-determined region and determines high-interest inspection areas in real-time. Through a multi-robot Informative Path Planning (MIPP) approach, the leader UAV coordinates a team of customized quadcopter (named ART2) to reach points of interest, managing their route dynamically. ART2s are able to land on water, and collect and test samples in situ by applying phosphorescence sensors. While path planning, task assignment, and route management are centralized operations, each UAV is conducted by a decentralized trajectory tracking control. Simulations performed in a realistic environment implemented on the Unity platform and experimental proof of concepts demonstrated the reliability of the proposed approach. The presented multi-UAV framework with heterogeneous agents also enables the reconfiguration and expansion of specific objectives, in addition to minimizing the task completion time by executing different processes in parallel. This preventive monitoring enables a plague control action in advance, solving it faster, cheaper, and more effectively.</p>","PeriodicalId":54794,"journal":{"name":"Journal of Intelligent & Robotic Systems","volume":"43 1","pages":""},"PeriodicalIF":3.3,"publicationDate":"2024-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139954308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}