
Latest publications in Biomimetic Intelligence and Robotics

An improved path planning and tracking control method for planetary exploration rovers with traversable tolerance
Pub Date : 2025-02-15 DOI: 10.1016/j.birob.2025.100219
Haojie Zhang, Feng Jiang, Qing Li
To ensure the safety and efficiency of planetary exploration rovers, path planning and tracking control must account for factors such as complex 3D terrain features, the rover's motion constraints, and traversability. This paper proposes an improved path planning and tracking control method for planetary exploration rovers on rough terrain. First, the kinematic model of the planetary rover is established, and a 3D motion-primitives library adapted to various terrains and rover orientations is generated. The state-expansion process and heuristic function of the A* algorithm are improved using the motion primitives and terrain features, and a global path satisfying the rover's kinematic constraints and the 3D terrain restrictions is generated by the improved A*-based algorithm. Subsequently, a candidate arc-path set is designed based on the rover's traversable capabilities; each arc path corresponds to a specific motion that determines the rover's linear and angular velocities, and the optimal path is selected through a multi-objective evaluation function. The rover accurately tracks the global path by executing the commands that correspond to the optimal arc path, enabling real-time obstacle avoidance. Finally, the method is validated in two simulation tests of a given mission. The results show that the improved A*-based algorithm reduces planning time by 30.05% and generates smoother paths than the classic A* algorithm, while the multi-objective arc-based method improves the rover's motion efficiency, ensuring safer and quicker mission completion along the global path.
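The primitive-based state expansion the abstract describes can be pictured with a toy 2-D stand-in (the paper's method is 3-D and terrain-aware; the grid, costs, and three-primitive set below are invented for the sketch, not taken from the paper):

```python
import heapq
import math

# States are (x, y, heading index 0..7); successors come from a small
# motion-primitive set (go straight, or turn 45 degrees and advance)
# instead of plain grid moves, so every expansion respects a heading
# constraint, loosely mirroring the rover's kinematic restrictions.
DIRS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

def successors(state, cost_map):
    x, y, h = state
    for dh, turn_cost in ((0, 0.0), (-1, 0.4), (1, 0.4)):  # primitive set
        nh = (h + dh) % 8
        dx, dy = DIRS[nh]
        nx, ny = x + dx, y + dy
        if 0 <= ny < len(cost_map) and 0 <= nx < len(cost_map[0]):
            if cost_map[ny][nx] < float("inf"):            # traversable?
                yield (nx, ny, nh), math.hypot(dx, dy) + turn_cost + cost_map[ny][nx]

def astar(start, goal_xy, cost_map):
    heur = lambda s: math.hypot(s[0] - goal_xy[0], s[1] - goal_xy[1])
    open_q = [(heur(start), 0.0, start)]
    g, came = {start: 0.0}, {}
    while open_q:
        _, gc, s = heapq.heappop(open_q)
        if (s[0], s[1]) == goal_xy:                        # reconstruct path
            path = [s]
            while s in came:
                s = came[s]
                path.append(s)
            return path[::-1]
        for ns, c in successors(s, cost_map):
            if gc + c < g.get(ns, float("inf")):
                g[ns] = gc + c
                came[ns] = s
                heapq.heappush(open_q, (g[ns] + heur(ns), g[ns], ns))
    return None

# Demo: route around an impassable wall in the middle of a 5x5 map
cost_map = [[0.0] * 5 for _ in range(5)]
for row in (1, 2, 3):
    cost_map[row][2] = float("inf")
path = astar((0, 2, 0), (4, 2), cost_map)
```

Because successors advance along the current heading, the returned path is already smooth in the sense that heading never jumps by more than one primitive step per expansion.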
Biomimetic Intelligence and Robotics, Volume 5, Issue 2, Article 100219. Citations: 0
Human-in-the-loop transfer learning in collision avoidance of autonomous robots
Pub Date : 2025-01-28 DOI: 10.1016/j.birob.2025.100215
Minako Oriyama , Pitoyo Hartono , Hideyuki Sawada
Neural networks have demonstrated exceptional performance across a range of applications. Yet their training often demands substantial time and data, presenting a challenge for autonomous robots operating in real-world environments where real-time learning is difficult. To address this constraint, we propose a novel human-in-the-loop framework that harnesses human expertise to ease the learning challenges of autonomous robots. Our approach centers on directly incorporating human knowledge and insights into the robot's learning pipeline. The proposed framework learns autonomously from the environment via reinforcement learning, using a pre-trained model that encapsulates human knowledge as its foundation. By integrating human-provided knowledge and evaluation, we aim to bridge the gap between human intuition and machine learning capabilities. Through a series of collision avoidance experiments, we validated that incorporating human knowledge significantly improves both learning efficiency and generalization. This collaborative learning paradigm enables robots to draw on human common sense and domain-specific expertise, resulting in faster convergence and better performance in complex environments. This research contributes to the development of more efficient and adaptable autonomous robots, analyzes how humans can effectively participate in robot learning and the effects of such participation, and illuminates the interplay between human cognition and artificial intelligence.
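The transfer pattern the abstract describes (seed a policy from human knowledge, then refine it with reinforcement learning) can be reduced to a minimal tabular illustration; the corridor environment, rewards, and demonstration data below are all invented for the sketch and are not the authors' setup:

```python
import random

random.seed(0)

N = 7             # corridor cells 0..6; goal at cell 6, obstacle at cell 3
ACTIONS = (1, 2)  # advance one cell, or hop two cells

def step(s, a):
    ns = min(s + a, N - 1)
    if ns == 3:
        return ns, -10.0, True    # collision ends the episode
    if ns == N - 1:
        return ns, 10.0, True     # reached the goal
    return ns, -1.0, False

# Stage 1 (transfer): seed Q-values from human demos ("hop over cell 3")
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
for s, a in [(0, 1), (1, 1), (2, 2), (4, 2)]:
    Q[(s, a)] = 1.0               # supervised prior from the human

# Stage 2 (fine-tuning): epsilon-greedy tabular Q-learning
for _ in range(200):
    s, done = 0, False
    while not done:
        if random.random() < 0.2:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        ns, r, done = step(s, a)
        target = r if done else r + 0.9 * max(Q[(ns, b)] for b in ACTIONS)
        Q[(s, a)] += 0.5 * (target - Q[(s, a)])
        s = ns

# Greedy rollout with the refined policy
s, trajectory, done = 0, [0], False
while not done:
    a = max(ACTIONS, key=lambda act: Q[(s, act)])
    s, _, done = step(s, a)
    trajectory.append(s)
```

The human prior steers early exploration toward the collision-free behavior, which is the intuition behind the faster convergence the paper reports.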
Biomimetic Intelligence and Robotics, Volume 5, Issue 1, Article 100215. Citations: 0
Forward solution algorithm of Fracture reduction robots based on Newton-Genetic algorithm
Pub Date : 2025-01-28 DOI: 10.1016/j.birob.2025.100216
Jian Li , Xiangyan Zhang , Yadong Mo , Guang Yang , Yun Dai , Chengyu Lv , Ying Zhang , Shimin Wei
The Fracture Reduction Robot (FRR) is a crucial component of robot-assisted fracture correction technology. However, long-term clinical experiments have identified significant challenges with the forward kinematics of the parallel FRR, notably slow computation and low precision. To address these issues, this paper proposes a hybrid algorithm that integrates the Newton method with a genetic algorithm, combining the rapid computation and high precision of the Newton method with the strong global convergence of the genetic algorithm. To evaluate the proposed algorithm comprehensively, it is compared against the analytical method and the Additional Sensor Algorithm (ASA) on identical computational examples. Additionally, iteration counts and precision are compared between traditional numerical methods and the Newton-Genetic algorithm. Experimental results show that the Newton-Genetic algorithm balances computation speed and precision, with accuracy on the order of 10⁻⁴ mm, effectively meeting the clinical requirements for fracture reduction robots in medical correction.
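The hybrid structure can be sketched on a stand-in root-finding problem: a tiny genetic algorithm supplies a coarse global seed, and Newton iterations with a numerical Jacobian polish it to high precision. The 2-equation system below (a circle intersected with a line) is invented for illustration and is not the FRR forward-kinematics system:

```python
import math
import random

random.seed(1)

def f(v):
    x, y = v
    return [x * x + y * y - 4.0, x - y]

def fitness(v):
    return math.hypot(*f(v))

def ga_seed(gens=40, pop_size=30, span=5.0):
    """Stage 1: elitist GA with Gaussian mutation finds a rough seed."""
    pop = [[random.uniform(-span, span), random.uniform(-span, span)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[:pop_size // 3]        # keep the best third
        pop = elite + [[g + random.gauss(0.0, 0.3)
                        for g in random.choice(elite)]
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=fitness)

def newton(v, iters=20, h=1e-6):
    """Stage 2: Newton's method with a forward-difference Jacobian."""
    v = list(v)
    for _ in range(iters):
        r = f(v)
        J = [[0.0, 0.0], [0.0, 0.0]]
        for j in range(2):
            vp = list(v)
            vp[j] += h
            rp = f(vp)
            for i in range(2):
                J[i][j] = (rp[i] - r[i]) / h
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        if abs(det) < 1e-12:
            break
        # Newton step: solve J * delta = -r for the 2x2 system
        dx = (-r[0] * J[1][1] + r[1] * J[0][1]) / det
        dy = (r[0] * J[1][0] - r[1] * J[0][0]) / det
        v = [v[0] + dx, v[1] + dy]
    return v

root = newton(ga_seed())
```

The GA alone would stall at modest precision; the Newton stage alone can diverge from a poor start. Chaining them is the speed/precision/global-convergence trade the paper exploits.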
Biomimetic Intelligence and Robotics, Volume 5, Issue 2, Article 100216. Citations: 0
SoftGrasp: Adaptive grasping for dexterous hand based on multimodal imitation learning
Pub Date : 2025-01-28 DOI: 10.1016/j.birob.2025.100217
Yihong Li, Ce Guo, Junkai Ren, Bailiang Chen, Chuang Cheng, Hui Zhang, Huimin Lu
Biomimetic grasping is crucial for robots to interact with the environment and perform complex tasks, making it a key focus in robotics and embodied intelligence. However, achieving human-level finger coordination and force control remains challenging due to the need for multimodal perception, including visual, kinesthetic, and tactile feedback. Although some recent approaches have demonstrated remarkable performance in grasping diverse objects, they often rely on expensive tactile sensors or are restricted to rigid objects. To address these challenges, we introduce SoftGrasp, a novel multimodal imitation learning approach for adaptive, multi-stage grasping of objects with varying sizes, shapes, and hardness. First, we develop an immersive demonstration platform with force feedback to collect rich, human-like grasping datasets. Inspired by human proprioceptive manipulation, this platform gathers multimodal signals, including visual images, robot finger joint angles, and joint torques, during demonstrations. Next, we utilize a multi-head attention mechanism to align and integrate multimodal features, dynamically allocating attention to ensure comprehensive learning. On this basis, we design a behavior cloning method based on an angle-torque loss function, enabling multimodal imitation learning. Finally, we validate SoftGrasp in extensive experiments across various scenarios, demonstrating its ability to adaptively adjust joint forces and finger angles based on real-time inputs. These capabilities result in a 98% success rate in real-world experiments, achieving dexterous and stable grasping. Source code and demonstration videos are available at https://github.com/nubot-nudt/SoftGrasp.
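The angle-torque behavior-cloning objective described above can be sketched as one imitation-loss term for finger joint angles plus one for joint torques, combined with tunable weights; the weights, joint counts, and demo values below are illustrative only, not the authors' values:

```python
# Weighted two-term imitation loss: mean-squared error on joint angles
# plus mean-squared error on joint torques, so the cloned policy must
# match both the pose and the force profile of the human demonstration.
def angle_torque_loss(pred_angles, pred_torques, demo_angles, demo_torques,
                      w_angle=1.0, w_torque=0.5):
    def mse(pred, demo):
        return sum((p - d) ** 2 for p, d in zip(pred, demo)) / len(demo)
    return w_angle * mse(pred_angles, demo_angles) \
         + w_torque * mse(pred_torques, demo_torques)

# Example: angles match the demo exactly, torques are off by 2 per joint
loss = angle_torque_loss([0.5, 0.5], [2.0, 2.0], [0.5, 0.5], [0.0, 0.0])
```

Weighting the torque term separately lets the policy trade off precise finger placement against compliant force application, which matters for soft or fragile objects.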
Biomimetic Intelligence and Robotics, Volume 5, Issue 2, Article 100217. Citations: 0
Fuzzy adaptive variable impedance control on deformable shield of defecation smart care robot
Pub Date : 2025-01-25 DOI: 10.1016/j.birob.2025.100214
Lingling Chen , Pengyue Lai , Yanglong Wang , Yuxin Dong
Precise control of the contact force is crucial in applications of non-wearable defecation smart care (DSC) robots. A deformable shield with a pressure-sensing function is designed, whose bending angle can be adjusted according to pressure feedback, enabling it to adapt to various body shapes. To improve force-tracking accuracy and prevent pronounced force overshoot in the initial contact stage, a contact-force control strategy based on fuzzy adaptive variable impedance is proposed. The strategy achieves an average root-mean-square error of 0.024 and an average overshoot of 1.74%. Experimental results demonstrate that the designed deformable shield fits the human body well, while the proposed control strategy enhances contact-force management and realizes precise control of the human–robot contact force.
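The variable-impedance idea can be illustrated with a 1-D contact simulation: a desired contact force is regulated by driving the end-effector through an impedance law whose damping is adapted by coarse fuzzy-style rules on the force error. All gains, the rule table, and the environment stiffness are invented for the sketch:

```python
K_ENV = 2000.0      # environment (body-surface) stiffness, N/m, assumed
F_DES = 5.0         # desired contact force, N
DT = 0.001          # integration step, s
M = 1.0             # virtual inertia of the impedance model, kg

def fuzzy_damping(err):
    """Crude 3-rule fuzzy schedule: large force error -> low damping
    (approach quickly), small error -> high damping (kill overshoot)."""
    a = abs(err)
    if a > 2.0:
        return 20.0
    if a > 0.5:
        return 60.0
    return 120.0

x, v = 0.0, 0.0     # end-effector position and velocity
forces = []
for _ in range(5000):                       # 5 s of simulated contact
    f_contact = max(0.0, K_ENV * x)         # unilateral contact force
    err = F_DES - f_contact
    B = fuzzy_damping(err)                  # adapted impedance damping
    acc = (err - B * v) / M                 # impedance law: M*a + B*v = err
    v += acc * DT                           # semi-implicit Euler
    x += v * DT
    forces.append(f_contact)
```

With fixed low damping the first contact overshoots; with fixed high damping the approach is sluggish. Scheduling the damping on the force error gets both a fast approach and a small overshoot, which is the trade-off the paper's fuzzy adaptation targets.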
Biomimetic Intelligence and Robotics, Volume 5, Issue 2, Article 100214. Citations: 0
Learning-based locomotion control fusing multimodal perception for a bipedal humanoid robot
Pub Date : 2025-01-18 DOI: 10.1016/j.birob.2025.100213
Chao Ji , Diyuan Liu , Wei Gao , Shiwu Zhang
The ability of bipedal humanoid robots to walk adaptively on varied terrain is a critical challenge for practical applications and has drawn substantial attention from academic and industrial research communities in recent years. Traditional model-based locomotion control methods have high modeling complexity, especially in complex terrain, making locomotion stability difficult to ensure. Reinforcement learning offers an end-to-end solution for locomotion control in humanoid robots, but such approaches typically rely solely on proprioceptive sensing to generate control policies, often resulting in more body collisions in practice. Excessive collisions can damage the biped robot hardware; more critically, the absence of multimodal input such as vision limits the robot's ability to perceive environmental context and adjust its gait trajectory promptly, which also hampers stability and robustness during tasks. In this paper, visual information is added to the locomotion control problem of the humanoid robot, and a three-stage multi-objective constrained policy-distillation optimization algorithm is proposed. Expert policies for different terrains, trained through reinforcement learning to meet gait-aesthetics requirements, are distilled into a single student policy. Experimental results demonstrate a significant reduction in collision rates when utilizing a control policy that integrates multimodal perception, especially on challenging terrains such as stairs, thresholds, and mixed surfaces. This advancement supports the practical deployment of bipedal humanoid robots.
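The expert-to-student distillation step can be reduced to its core: terrain-specific expert policies are compressed into one student by minimizing cross-entropy between student and expert action distributions on each expert's own terrain. The terrains, action set, and tabular "network" below are invented for illustration:

```python
import math
import random

random.seed(2)

TERRAINS = ("flat", "stairs", "threshold")
ACTIONS = 3  # e.g. step-low / step-mid / step-high (assumed labels)

# Experts: confident, terrain-appropriate action distributions
EXPERT = {"flat":      [0.90, 0.05, 0.05],
          "stairs":    [0.05, 0.05, 0.90],
          "threshold": [0.05, 0.90, 0.05]}

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

# Student: one logit vector per terrain observation, trained by
# gradient descent on cross-entropy H(expert, student); for a softmax,
# d/dz of that cross-entropy is simply (student_prob - expert_prob).
logits = {t: [0.0] * ACTIONS for t in TERRAINS}
for _ in range(500):
    t = random.choice(TERRAINS)            # sample a terrain "batch"
    p = softmax(logits[t])
    for a in range(ACTIONS):
        logits[t][a] -= 0.5 * (p[a] - EXPERT[t][a])

student = {t: softmax(logits[t]) for t in TERRAINS}
```

One student network ends up reproducing every expert's terrain-conditional behavior, which is what lets a single deployed policy cover stairs, thresholds, and flat ground at once.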
Biomimetic Intelligence and Robotics, Volume 5, Issue 1, Article 100213. Citations: 0
Human-like dexterous manipulation for anthropomorphic five-fingered hands: A review
Pub Date : 2025-01-14 DOI: 10.1016/j.birob.2025.100212
Yayu Huang , Dongxuan Fan , Haonan Duan , Dashun Yan , Wen Qi , Jia Sun , Qian Liu , Peng Wang
Humans excel at dexterous manipulation; however, achieving human-level dexterity remains a significant challenge for robots. Technological breakthroughs in the design of anthropomorphic robotic hands, as well as advancements in visual and tactile perception, have demonstrated significant advantages in addressing this issue. However, coping with the inevitable uncertainty caused by unstructured and dynamic environments in human-like dexterous manipulation tasks, especially for anthropomorphic five-fingered hands, remains an open problem. In this paper, we present a focused review of human-like dexterous manipulation for anthropomorphic five-fingered hands. We begin by defining human-like dexterity and outlining the tasks associated with human-like robot dexterous manipulation. Subsequently, we delve into anthropomorphism and anthropomorphic five-fingered hands, covering definitions, robotic design, and evaluation criteria. Furthermore, we review the learning methods for achieving human-like dexterity in anthropomorphic five-fingered hands, including imitation learning, reinforcement learning and their integration. Finally, we discuss the existing challenges and propose future research directions. This review aims to stimulate interest in scientific research and future applications.
Biomimetic Intelligence and Robotics, Volume 5, Issue 1, Article 100212. Citations: 0
Editorial for the special issue on biomimetic soft robotics: Actuation, sensing, and integration
Pub Date : 2025-01-14 DOI: 10.1016/j.birob.2025.100211
Ming Jiang, Muhao Chen, Dongbo Zhou, Zebing Mao
Biomimetic Intelligence and Robotics, Volume 5, Issue 1, Article 100211. Citations: 0
A guided approach for cross-view geolocalization estimation with land cover semantic segmentation
Pub Date : 2025-01-11 DOI: 10.1016/j.birob.2024.100208
Nathan A.Z. Xavier , Elcio H. Shiguemori , Marcos R.O.A. Maximo , Mubarak Shah
Geolocalization is a crucial process that leverages environmental information and contextual data to accurately identify a position. In particular, cross-view geolocalization utilizes images from various perspectives, such as satellite and ground-level images, which are relevant for applications like robotics navigation and autonomous navigation. In this research, we propose a methodology that integrates cross-view geolocalization estimation with a land cover semantic segmentation map. Our solution demonstrates comparable performance to state-of-the-art methods, exhibiting enhanced stability and consistency regardless of the street view location or the dataset used. Additionally, our method generates a focused discrete probability distribution that acts as a heatmap. This heatmap effectively filters out incorrect and unlikely regions, enhancing the reliability of our estimations. Code is available at https://github.com/nathanxavier/CVSegGuide.
{"title":"A guided approach for cross-view geolocalization estimation with land cover semantic segmentation","authors":"Nathan A.Z. Xavier ,&nbsp;Elcio H. Shiguemori ,&nbsp;Marcos R.O.A. Maximo ,&nbsp;Mubarak Shah","doi":"10.1016/j.birob.2024.100208","DOIUrl":"10.1016/j.birob.2024.100208","url":null,"abstract":"<div><div>Geolocalization is a crucial process that leverages environmental information and contextual data to accurately identify a position. In particular, cross-view geolocalization utilizes images from various perspectives, such as satellite and ground-level images, which are relevant for applications like robotics navigation and autonomous navigation. In this research, we propose a methodology that integrates cross-view geolocalization estimation with a land cover semantic segmentation map. Our solution demonstrates comparable performance to state-of-the-art methods, exhibiting enhanced stability and consistency regardless of the street view location or the dataset used. Additionally, our method generates a focused discrete probability distribution that acts as a heatmap. This heatmap effectively filters out incorrect and unlikely regions, enhancing the reliability of our estimations. Code is available at <span><span>https://github.com/nathanxavier/CVSegGuide</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":100184,"journal":{"name":"Biomimetic Intelligence and Robotics","volume":"5 2","pages":"Article 100208"},"PeriodicalIF":0.0,"publicationDate":"2025-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143379049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Optimization-based UWB positioning with multiple tags for estimating position and rotation simultaneously
Pub Date : 2025-01-10 DOI: 10.1016/j.birob.2025.100210
Hao Chen, Bo Yang, Luyang Li, Tao Liu, Jiacheng Zhang, Ying Zhang
Ultra-wideband (UWB) positioning is widely applied to indoor robot localization and achieves high accuracy. However, in narrow and complex environments, its accuracy is still significantly degraded by multipath effects or non-line-of-sight conditions. In addition, current single-tag pure-UWB positioning methods estimate only the tag position and ignore the robot's rotation. Therefore, in this paper, we propose a multi-tag UWB positioning method that estimates position and rotation simultaneously and further improves position estimation accuracy. Specifically, we first install four fixed tags on the robot. Then, based on the ranging measurements, the anchor positions, and the geometric relationships between the tags, we design five geometric and smoothness constraints to build a single optimization function. With this function, the rotations and positions at each time step are estimated by an iterative optimization algorithm, and the tag position estimates are refined. Both simulation and real-world experiments are conducted to evaluate the proposed method, and we also explore how the relative distances between tags affect rotation estimation. The results suggest that the proposed method effectively improves position estimation, while larger relative distances between tags benefit rotation estimation.
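The core multi-tag idea — jointly estimating position and rotation from ranges to tags at known body-frame offsets — can be sketched as a small nonlinear least-squares problem. The numpy example below is a toy 2-D Gauss-Newton version under simplifying assumptions (planar pose, noise-free ranges from every anchor to every tag); it does not reproduce the paper's five constraints, and all names are illustrative.

```python
import numpy as np

def estimate_pose(anchors, tag_offsets, ranges, x0, iters=50):
    """Jointly estimate a planar robot pose (x, y, theta) from UWB ranges
    between fixed anchors and tags mounted at known body-frame offsets,
    via Gauss-Newton on the range residuals. Toy sketch only."""
    x = np.asarray(x0, float).copy()
    for _ in range(iters):
        c, s = np.cos(x[2]), np.sin(x[2])
        R = np.array([[c, -s], [s, c]])
        dR = np.array([[-s, -c], [c, -s]])      # d(R)/d(theta)
        res, J = [], []
        for a, t, r in zip(anchors, tag_offsets, ranges):
            d = x[:2] + R @ t - a               # anchor-to-tag vector
            dist = np.linalg.norm(d)
            res.append(dist - r)
            # gradient of dist w.r.t. (x, y, theta)
            J.append(np.concatenate([d / dist, [d @ (dR @ t) / dist]]))
        dx = np.linalg.lstsq(np.array(J), -np.array(res), rcond=None)[0]
        x += dx
        if np.linalg.norm(dx) < 1e-12:
            break
    return x

# Synthetic check: four anchors, four tags at the robot's corners,
# noise-free ranges generated from a known ground-truth pose.
corner_anchors = [np.array(a, float) for a in [(0, 0), (10, 0), (10, 10), (0, 10)]]
offsets = [np.array(t, float) for t in [(0.3, 0.2), (0.3, -0.2),
                                        (-0.3, 0.2), (-0.3, -0.2)]]
truth = np.array([1.0, 2.0, 0.3])
c, s = np.cos(truth[2]), np.sin(truth[2])
Rt = np.array([[c, -s], [s, c]])
A, T, rng = [], [], []
for a in corner_anchors:
    for t in offsets:
        A.append(a)
        T.append(t)
        rng.append(np.linalg.norm(truth[:2] + Rt @ t - a))
est = estimate_pose(A, T, np.array(rng), x0=[0.5, 1.0, 0.0])
```

Note that heading becomes observable only because the tags are offset from the robot center — which is also why, as the abstract reports, larger relative distances between tags benefit rotation estimation.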
Biomimetic Intelligence and Robotics, 5(2), Article 100210
Citations: 0