Haptic-guided grasping to minimise torque effort during robotic telemanipulation
Pub Date: 2023-04-12, DOI: 10.1007/s10514-023-10096-7
Rahaf Rahal, Amir M. Ghalamzan-E., Firas Abi-Farraj, Claudio Pacchierotti, Paolo Robuffo Giordano
Teleoperating robotic manipulators can be complicated and cognitively demanding for the human operator. Despite these difficulties, teleoperated robotic systems remain popular in several industrial applications, e.g., remote handling of hazardous material. In this context, we present a novel haptic shared-control method for minimising the manipulator torque effort during remote manipulation: the operator is assisted in selecting a suitable grasping pose for subsequently displacing an object along a desired trajectory. Minimising torque is important because it reduces the system's operating cost and extends the range of objects that can be manipulated. We demonstrate the effectiveness of the proposed approach in a series of representative real-world pick-and-place experiments as well as in a human-subjects study. The results show that our shared control outperforms a standard teleoperation approach. We also find that haptic-only guidance performs better than visual-only guidance, although combining the two leads to the best overall results.
{"title":"Haptic-guided grasping to minimise torque effort during robotic telemanipulation","authors":"Rahaf Rahal, Amir M. Ghalamzan-E., Firas Abi-Farraj, Claudio Pacchierotti, Paolo Robuffo Giordano","doi":"10.1007/s10514-023-10096-7","DOIUrl":"10.1007/s10514-023-10096-7","url":null,"abstract":"<div><p>Teleoperating robotic manipulators can be complicated and cognitively demanding for the human operator. Despite these difficulties, teleoperated robotic systems are still popular in several industrial applications, e.g., remote handling of hazardous material. In this context, we present a novel haptic shared control method for minimising the manipulator torque effort during remote manipulative actions in which an operator is assisted in selecting a suitable grasping pose for then displacing an object along a desired trajectory. Minimising torque is important because it reduces the system operating cost and extends the range of objects that can be manipulated. We demonstrate the effectiveness of the proposed approach in a series of representative real-world pick-and-place experiments as well as in a human subjects study. The reported results prove the effectiveness of our shared control vs. a standard teleoperation approach. We also find that haptic-only guidance performs better than visual-only guidance, although combining them together leads to the best overall results.\u0000</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"47 4","pages":"405 - 423"},"PeriodicalIF":3.5,"publicationDate":"2023-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48097502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robotic hand synergies for in-hand regrasping driven by object information
Pub Date: 2023-04-11, DOI: 10.1007/s10514-023-10101-z
Dimitrios Dimou, José Santos-Victor, Plinio Moreno
We develop a conditional generative model to represent dexterous grasp postures of a robotic hand and use it to generate in-hand regrasp trajectories. Our model learns to encode robotic grasp postures into a low-dimensional space, called the Synergy Space, while taking into account additional information about the object, such as its size and shape category. We then generate regrasp trajectories through linear interpolation in this low-dimensional space. As a result, the hand configuration moves from one grasp type to another while keeping the object stable in the hand. We show that, by taking advantage of the grasp-size conditional variable, our model achieves a higher success rate on in-hand regrasping than previous methods used for synergy extraction.
{"title":"Robotic hand synergies for in-hand regrasping driven by object information","authors":"Dimitrios Dimou, José Santos-Victor, Plinio Moreno","doi":"10.1007/s10514-023-10101-z","DOIUrl":"10.1007/s10514-023-10101-z","url":null,"abstract":"<div><p>We develop a conditional generative model to represent dexterous grasp postures of a robotic hand and use it to generate in-hand regrasp trajectories. Our model learns to encode the robotic grasp postures into a low-dimensional space, called Synergy Space, while taking into account additional information about the object such as its size and its shape category. We then generate regrasp trajectories through linear interpolation in this low-dimensional space. The result is that the hand configuration moves from one grasp type to another while keeping the object stable in the hand. We show that our model achieves higher success rate on in-hand regrasping compared to previous methods used for synergy extraction, by taking advantage of the grasp size conditional variable.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"47 4","pages":"453 - 464"},"PeriodicalIF":3.5,"publicationDate":"2023-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10514-023-10101-z.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46694831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning instance-level N-ary semantic knowledge at scale for robots operating in everyday environments
Pub Date: 2023-04-06, DOI: 10.1007/s10514-023-10099-4
Weiyu Liu, Dhruva Bansal, Angel Daruna, Sonia Chernova
Robots operating in everyday environments need to effectively perceive, model, and infer semantic properties of objects. Existing knowledge reasoning frameworks only model binary relations between an object's class label and its semantic properties, and are unable to collectively reason about object properties detected by different perception algorithms and grounded in diverse sensory modalities. We bridge the gap between multimodal perception and knowledge reasoning by introducing an n-ary representation that models complex, inter-related object properties. To tackle the problem of collecting n-ary semantic knowledge at scale, we propose transformer neural networks that generalize knowledge from observations of object instances by learning to predict single missing properties or the joint probabilities of all properties. The learned models can reason at different levels of abstraction, effectively predicting unknown properties of objects in different environmental contexts given different amounts of observed information. We quantitatively validate our approach against prior methods on LINK, a unique dataset we contribute that contains 1457 object instances in different situations, amounting to 15 multimodal property types and 200 total properties. Compared to the top-performing baseline, a Markov Logic Network, our models obtain a 10% improvement in predicting unknown properties of novel object instances while reducing training and inference time by a factor of more than 150. Additionally, we apply our work to a mobile manipulation robot, demonstrating its ability to leverage n-ary reasoning to retrieve objects and actively detect object properties. The code and data are available at https://github.com/wliu88/LINK.
Autonomous Robots 47(5), 529-547.
Multimodal embodied attribute learning by robots for object-centric action policies
Pub Date: 2023-03-29, DOI: 10.1007/s10514-023-10098-5
Xiaohan Zhang, Saeid Amiri, Jivko Sinapov, Jesse Thomason, Peter Stone, Shiqi Zhang
Robots frequently need to perceive object attributes, such as red, heavy, and empty, using multimodal exploratory behaviors, such as look, lift, and shake. One possible way for robots to do so is to learn a classifier for each perceivable attribute given an exploratory behavior. Once the attribute classifiers are learned, they can be used by robots to select actions and identify attributes of new objects, answering questions such as "Is this object red and empty?" In this article, we introduce a robot interactive perception problem, called Multimodal Embodied Attribute Learning (MEAL), and explore solutions to this new problem. Under different assumptions, there are two classes of MEAL problems. Offline-MEAL problems are defined in this article as learning attribute classifiers from pre-collected data and sequencing actions towards attribute identification under the challenging trade-off between information gain and exploration action cost. For this purpose, we introduce Mixed Observability Robot Control (MORC), an algorithm for offline-MEAL problems that dynamically constructs both fully and partially observable components of the state for multimodal attribute identification of objects. We further investigate a more challenging class of MEAL problems, called online-MEAL, where the robot assumes no pre-collected data and works on attribute classification and attribute identification at the same time. Based on MORC, we develop an algorithm called Information-Theoretic Reward Shaping (MORC-ITRS) that actively addresses the trade-off between exploration and exploitation in online-MEAL problems. MORC and MORC-ITRS are evaluated against competitive MEAL baselines, and the results demonstrate the superiority of our methods in learning efficiency and identification accuracy.
Autonomous Robots 47(5), 505-528.
Co-design of communication and machine inference for cloud robotics
Pub Date: 2023-03-20, DOI: 10.1007/s10514-023-10093-w
Manabu Nakanoya, Sai Shankar Narasimhan, Sharachchandra Bhat, Alexandros Anemogiannis, Akul Datta, Sachin Katti, Sandeep Chinchali, Marco Pavone
Today, even the most compute- and power-constrained robots can measure complex, high-data-rate video and LIDAR sensory streams. Often, such robots, ranging from low-power drones to space and subterranean rovers, need to transmit high-bitrate sensory data to a remote compute server if they are uncertain or cannot scalably run complex perception or mapping tasks locally. However, today's representations for sensory data are mostly designed for human, not robotic, perception and thus often waste precious compute or wireless network resources transmitting unimportant parts of a scene that are unnecessary for a high-level robotic task. This paper presents an algorithm to learn task-relevant representations of sensory data that are co-designed with a pre-trained robotic perception model's ultimate objective. Our algorithm aggressively compresses robotic sensory data by up to 11 times more than competing methods. Further, it achieves high accuracy and robust generalization on diverse tasks including Mars terrain classification with low-power deep learning accelerators, neural motion planning, and environmental time-series classification.
{"title":"Co-design of communication and machine inference for cloud robotics","authors":"Manabu Nakanoya, Sai Shankar Narasimhan, Sharachchandra Bhat, Alexandros Anemogiannis, Akul Datta, Sachin Katti, Sandeep Chinchali, Marco Pavone","doi":"10.1007/s10514-023-10093-w","DOIUrl":"10.1007/s10514-023-10093-w","url":null,"abstract":"<div><p>Today, even the most compute-and-power constrained robots can measure complex, high data-rate video and LIDAR sensory streams. Often, such robots, ranging from low-power drones to space and subterranean rovers, need to transmit high-bitrate sensory data to a remote compute server if they are uncertain or cannot scalably run complex perception or mapping tasks locally. However, today’s representations for sensory data are mostly designed for <i>human, not robotic</i>, perception and thus often waste precious compute or wireless network resources to transmit unimportant parts of a scene that are unnecessary for a high-level robotic task. This paper presents an algorithm to learn <i>task-relevant</i> representations of sensory data that are co-designed with a pre-trained robotic perception model’s ultimate objective. Our algorithm aggressively compresses robotic sensory data by up to 11<span>(times )</span> more than competing methods. Further, it achieves high accuracy and robust generalization on diverse tasks including Mars terrain classification with low-power deep learning accelerators, neural motion planning, and environmental timeseries classification.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"47 5","pages":"579 - 594"},"PeriodicalIF":3.5,"publicationDate":"2023-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10514-023-10093-w.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41639268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
HeRo 2.0: a low-cost robot for swarm robotics research
Pub Date: 2023-03-20, DOI: 10.1007/s10514-023-10100-0
Paulo Rezeck, Héctor Azpúrua, Maurício F. S. Corrêa, Luiz Chaimowicz
The current state of electronic component miniaturization, coupled with increasing efficiency in hardware and software, allows the development of smaller, more compact robotic systems. The convenience of using these small, simple, yet capable robots has drawn the research community's attention towards practical applications of swarm robotics. This paper presents the design of a novel platform for swarm robotics applications that is low cost, easy to assemble using off-the-shelf components, and deeply integrated with the most widely used robotic framework available today: ROS (Robot Operating System). The robotic platform is entirely open, composed of a 3D-printed body and open-source software. We describe its architecture, present its main features, and evaluate its functionality in experiments with a couple of robots. The results demonstrate that the proposed mobile robot is capable of performing different swarm tasks and that, given its small size and reduced cost, it is well suited to swarm robotics research and education.
{"title":"HeRo 2.0: a low-cost robot for swarm robotics research","authors":"Paulo Rezeck, Héctor Azpúrua, Maurício F. S. Corrêa, Luiz Chaimowicz","doi":"10.1007/s10514-023-10100-0","DOIUrl":"10.1007/s10514-023-10100-0","url":null,"abstract":"<div><p>The current state of electronic component miniaturization coupled with the increasing efficiency in hardware and software allow the development of smaller and compact robotic systems. The convenience of using these small, simple, yet capable robots has gathered the research community’s attention towards practical applications of swarm robotics. This paper presents the design of a novel platform for swarm robotics applications that is low cost, easy to assemble using off-the-shelf components, and deeply integrated with the most used robotic framework available today: ROS (Robot Operating System). The robotic platform is entirely open, composed of a 3D printed body and open-source software. We describe its architecture, present its main features, and evaluate its functionalities executing experiments using a couple of robots. Results demonstrate that the proposed mobile robot is capable of performing different swarm tasks, given its small size and reduced cost, being suitable for swarm robotics research and education.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"47 7","pages":"879 - 903"},"PeriodicalIF":3.5,"publicationDate":"2023-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91282746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visuo-haptic object perception for robots: an overview
Pub Date: 2023-03-14, DOI: 10.1007/s10514-023-10091-y
Nicolás Navarro-Guerrero, Sibel Toprak, Josip Josifovski, Lorenzo Jamone
The object perception capabilities of humans are impressive, and this becomes even more evident when trying to develop solutions with a similar proficiency in autonomous robots. While there have been notable advancements in the technologies for artificial vision and touch, the effective integration of these two sensory modalities in robotic applications still needs to be improved, and several open challenges exist. Taking inspiration from how humans combine visual and haptic perception to perceive object properties and drive the execution of manual tasks, this article summarises the current state of the art of visuo-haptic object perception in robots. Firstly, the biological basis of human multimodal object perception is outlined. Then, the latest advances in sensing technologies and data collection strategies for robots are discussed. Next, an overview of the main computational techniques is presented, highlighting the main challenges of multimodal machine learning and presenting a few representative articles in the areas of robotic object recognition, peripersonal space representation and manipulation. Finally, informed by the latest advancements and open challenges, this article outlines promising new research directions.
{"title":"Visuo-haptic object perception for robots: an overview","authors":"Nicolás Navarro-Guerrero, Sibel Toprak, Josip Josifovski, Lorenzo Jamone","doi":"10.1007/s10514-023-10091-y","DOIUrl":"10.1007/s10514-023-10091-y","url":null,"abstract":"<div><p>The object perception capabilities of humans are impressive, and this becomes even more evident when trying to develop solutions with a similar proficiency in autonomous robots. While there have been notable advancements in the technologies for artificial vision and touch, the effective integration of these two sensory modalities in robotic applications still needs to be improved, and several open challenges exist. Taking inspiration from how humans combine visual and haptic perception to perceive object properties and drive the execution of manual tasks, this article summarises the current state of the art of visuo-haptic object perception in robots. Firstly, the biological basis of human multimodal object perception is outlined. Then, the latest advances in sensing technologies and data collection strategies for robots are discussed. Next, an overview of the main computational techniques is presented, highlighting the main challenges of multimodal machine learning and presenting a few representative articles in the areas of robotic object recognition, peripersonal space representation and manipulation. Finally, informed by the latest advancements and open challenges, this article outlines promising new research directions.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"47 4","pages":"377 - 403"},"PeriodicalIF":3.5,"publicationDate":"2023-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10514-023-10091-y.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46918377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Point-based metric and topological localisation between lidar and overhead imagery
Pub Date: 2023-03-02, DOI: 10.1007/s10514-023-10085-w
Tim Yuqing Tang, Daniele De Martini, Paul Newman
In this paper, we present a method for localising a ground lidar using overhead imagery only. Public overhead imagery, such as Google satellite images, is a readily available resource. It can be used as a map proxy for robot localisation, relaxing the requirement of traditional approaches for a prior traversal for mapping. While prior approaches have focused on metric localisation between range sensors and overhead imagery, our method is the first to learn both place recognition and metric localisation of a ground lidar using overhead imagery, and it also outperforms prior methods on metric localisation with large initial pose offsets. To bridge the drastic domain gap between lidar data and overhead imagery, our method learns to transform an overhead image into a collection of 2D points, emulating the point cloud scanned by a lidar sensor situated near the centre of the overhead image. Once both modalities are expressed as point sets, point-based machine learning methods for localisation are applied.
{"title":"Point-based metric and topological localisation between lidar and overhead imagery","authors":"Tim Yuqing Tang, Daniele De Martini, Paul Newman","doi":"10.1007/s10514-023-10085-w","DOIUrl":"10.1007/s10514-023-10085-w","url":null,"abstract":"<div><p>In this paper, we present a method for solving the localisation of a ground lidar using overhead imagery only. Public overhead imagery such as Google satellite images are readily available resources. They can be used as the map proxy for robot localisation, relaxing the requirement for a prior traversal for mapping as in traditional approaches. While prior approaches have focused on the metric localisation between range sensors and overhead imagery, our method is the first to learn both place recognition and metric localisation of a ground lidar using overhead imagery, and also outperforms prior methods on metric localisation with large initial pose offsets. To bridge the drastic domain gap between lidar data and overhead imagery, our method learns to transform an overhead image into a collection of 2D points, emulating the resulting point-cloud scanned by a lidar sensor situated near the centre of the overhead image. After both modalities are expressed as point sets, point-based machine learning methods for localisation are applied.\u0000</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"47 5","pages":"595 - 615"},"PeriodicalIF":3.5,"publicationDate":"2023-03-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10514-023-10085-w.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45125784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust inverse dynamics by evaluating Newton–Euler equations with respect to a moving reference and measuring angular acceleration
Pub Date: 2023-02-28, DOI: 10.1007/s10514-023-10092-x
Maximilian Gießler, Bernd Waltersberger
Maintaining stability while walking on arbitrary surfaces or dealing with external perturbations is of great interest in humanoid robotics research. Increasing the system's autonomous robustness to a variety of postural threats during locomotion is key, even though this requires evaluating noisy sensor signals. The equations of motion are the foundation of all published approaches. In contrast to these, we propose evaluating the equations of motion with respect to an arbitrary moving reference point in a non-inertial reference frame. Conceptual advantages include, e.g., independence from global position and velocity vectors estimated by sensor fusion, and the ability to compute the imaginary zero-moment point when walking on differently inclined ground surfaces. Further, we improve the calculation results by reducing noise-amplifying methods in our algorithm and by using specific characteristics of physical robots. We use simulation results to compare our algorithm with established approaches and test it with experimental robot data.
{"title":"Robust inverse dynamics by evaluating Newton–Euler equations with respect to a moving reference and measuring angular acceleration","authors":"Maximilian Gießler, Bernd Waltersberger","doi":"10.1007/s10514-023-10092-x","DOIUrl":"10.1007/s10514-023-10092-x","url":null,"abstract":"<div><p>Maintaining stability while walking on arbitrary surfaces or dealing with external perturbations is of great interest in humanoid robotics research. Increasing the system’s autonomous robustness to a variety of postural threats during locomotion is the key despite the need to evaluate noisy sensor signals. The equations of motion are the foundation of all published approaches. In contrast, we propose a more adequate evaluation of the equations of motion with respect to an arbitrary moving reference point in a non-inertial reference frame. Conceptual advantages are, e.g., getting independent of global position and velocity vectors estimated by sensor fusions or calculating the imaginary zero-moment point walking on different inclined ground surfaces. Further, we improve the calculation results by reducing noise-amplifying methods in our algorithm and using specific characteristics of physical robots. We use simulation results to compare our algorithm with established approaches and test it with experimental robot data.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"47 4","pages":"465 - 481"},"PeriodicalIF":3.5,"publicationDate":"2023-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10514-023-10092-x.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47285445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automated group motion control of magnetically actuated millirobots
Pub Date: 2023-02-25, DOI: 10.1007/s10514-023-10084-x
Pouria Razzaghi, Ehab Al Khatib, Yildirim Hurmuzlu
Small-size robots offer access to spaces that are inaccessible to larger ones. This type of access is crucial in applications such as drug delivery, environmental detection, and collection of small samples. However, some tasks cannot be performed with a single robot, including assembly and manufacturing at small scales, manipulation of micro- and nano-objects, and robot-based structuring of small-scale materials. In this article, we focus on tasks that can be achieved using a group of small-scale robots, such as pattern formation. These robots are typically externally actuated due to their size limitations. One therefore faces the challenge of controlling a group of robots using a single global input. In this study, we propose a control algorithm to position individual members of a group in predefined positions. In our previous work, we presented a small-scale, magnetically actuated millirobot: an electromagnetic coil system applied external force and steered the millirobots in various modes of motion, such as pivot walking and tumbling. In this paper, we propose two new designs of these millirobots. In the first design, the magnets are placed at the center of the body to reduce the magnetic attraction force between the millirobots. In the second design, the millirobots are of identical length, with two extra legs acting as pivot points; the pivot separation varies between robots to take advantage of variable speed in pivot-walking mode while keeping the speed constant in tumbling mode. This paper presents an algorithm for positional control of n millirobots with different lengths, moving them from given initial positions to final desired ones. The method is based on choosing a leader that is fully controllable. The motions of the other millirobots are then regulated by following the leader and determining their appropriate pivot separations in order to implement the intended group motion. Simulations and hardware experiments validate these results.
{"title":"Automated group motion control of magnetically actuated millirobots","authors":"Pouria Razzaghi, Ehab Al Khatib, Yildirim Hurmuzlu","doi":"10.1007/s10514-023-10084-x","DOIUrl":"10.1007/s10514-023-10084-x","url":null,"abstract":"<div><p>Small-size robots offer access to spaces that are inaccessible to larger ones. This type of access is crucial in applications such as drug delivery, environmental detection, and collection of small samples. However, there are some tasks that are not possible to perform using only one robot including assembly and manufacturing at small scales, manipulation of micro- and nano- objects, and robot-based structuring of small-scale materials. In this article, we focus on tasks that can be achieved using a group of small-scale robots like pattern formation. These robots are typically externally actuated due to their size limitation. Yet, one faces the challenge of controlling a group of robots using a single global input. In this study, we propose a control algorithm to position individual members of a group in predefined positions. In our previous work, we presented a small-scaled magnetically actuated millirobot. An electromagnetic coil system applied external force and steered the millirobots in various modes of motion such as pivot walking and tumbling. In this paper, we propose two new designs of these millirobots. In the first design, the magnets are placed at the center of body to reduce the magnetic attraction force between the millirobots. In the second design, the millirobots are of identical length with two extra legs acting as the pivot points and varying pivot separation in design to take advantage of variable speed in pivot walking mode while keeping the speed constant in tumbling mode. This paper presents an algorithm for positional control of <i>n</i> millirobots with different lengths to move them from given initial positions to final desired ones. This method is based on choosing a leader that is fully controllable. Then, the motions of other millirobots are regulated by following the leader and determining their appropriate pivot separations in order to implement the intended group motion. Simulations and hardware experiments validate these results.</p></div>","PeriodicalId":55409,"journal":{"name":"Autonomous Robots","volume":"47 7","pages":"865 - 877"},"PeriodicalIF":3.5,"publicationDate":"2023-02-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10514-023-10084-x.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44859563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}