[This corrects the article DOI: 10.3389/frobt.2024.1451231.].
Inferential decision-making algorithms typically assume that an underlying probabilistic model of decision alternatives and outcomes may be learned a priori or online. However, when applied to robots in real-world settings, they often perform unsatisfactorily or fail to accomplish the necessary tasks because this assumption is violated and/or because they experience unanticipated external pressures and constraints. Cognitive studies presented in this and other papers show that humans cope with complex and unknown settings by modulating between near-optimal and satisficing solutions, including heuristics, by leveraging the information value of available, possibly redundant, environmental cues. Using the benchmark inferential decision problem known as the "treasure hunt", this paper develops a general approach for investigating and modeling active perception solutions under pressure. By simulating treasure hunt problems in virtual worlds, our approach learns generalizable strategies from high performers that, when applied to robots, allow them to modulate between optimal and heuristic solutions on the basis of external pressures and probabilistic models, if and when available. The result is a suite of active perception algorithms for camera-equipped robots that outperform treasure-hunt solutions obtained via cell decomposition, information roadmap, and information potential algorithms, in both high-fidelity numerical simulations and physical experiments. The effectiveness of the new active perception strategies is demonstrated under a broad range of unanticipated conditions that cause existing algorithms to fail to complete the search for treasures, such as unmodelled time constraints, resource constraints, and adverse weather (fog).
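The abstract does not detail how the learned strategies switch between regimes. Purely as an illustration of the modulation idea, the following sketch chooses a near-optimal, model-based measurement location when a probabilistic model is available and time pressure is low, and falls back to a cue-following heuristic otherwise; every function and threshold here is hypothetical and is not the paper's algorithm.

    # Illustrative sketch (not the paper's algorithm): modulating between a
    # near-optimal, model-based choice of the next measurement location and a
    # cheap satisficing heuristic, depending on external pressure and model availability.
    import math

    def expected_info_gain(cell_prior):
        # Entropy of a Bernoulli "treasure present" belief for one cell.
        p = min(max(cell_prior, 1e-9), 1 - 1e-9)
        return -(p * math.log(p) + (1 - p) * math.log(1 - p))

    def choose_next_cell(beliefs, cue_strengths, time_left, travel_cost, pressure_threshold=0.3):
        """beliefs: dict cell -> prior probability of treasure (None if no model is available).
        cue_strengths: dict cell -> strength of an observable environmental cue.
        time_left and travel_cost act as crude proxies for external pressure."""
        model_available = beliefs is not None
        pressured = time_left < pressure_threshold * sum(travel_cost.values())

        if model_available and not pressured:
            # Near-optimal: maximize expected information gain per unit travel cost.
            return max(beliefs, key=lambda c: expected_info_gain(beliefs[c]) / travel_cost[c])
        # Satisficing heuristic: follow the strongest (cheapest to reach) available cue.
        return max(cue_strengths, key=lambda c: cue_strengths[c] / travel_cost[c])

    # Toy usage with made-up numbers.
    print(choose_next_cell({"A": 0.5, "B": 0.9, "C": 0.1},
                           {"A": 0.2, "B": 0.7, "C": 0.4},
                           time_left=10.0,
                           travel_cost={"A": 5.0, "B": 12.0, "C": 3.0}))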
This paper presents a novel optimal trajectory-shaping control concept for a planetary hopping robot. The hopping robot suffers from uncontrolled in-flight and undesired after-landing motions, leading to position drift at landing. The proposed concept builds on Generalized Vector Explicit (GENEX) guidance, which can generate and shape an optimal trajectory while satisfying end-point constraints such as the impact angle of the velocity vector. Applied to a thruster-based hopping robot, the concept achieves a range of impact angles, reduces the position drift at landing caused by the undesired in-flight and after-landing motions, and handles errors in the initial hopping angle. The concept is realized by generating lateral acceleration through thruster orientation control. Extensive simulations are carried out on horizontal and sloped surfaces with different initial and impact angle conditions to demonstrate the effect of the impact angle on the position drift error and the viability of the proposed approach.
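The abstract does not reproduce the guidance law itself. For orientation only, the following is a minimal sketch of the optimal-control structure that GENEX-type laws build on, assuming double-integrator dynamics, a minimum-control-effort cost, and terminal constraints on both position and velocity (the latter fixing the impact angle); GENEX additionally shapes the trajectory through tunable gains, so this is not the authors' exact command.

    \[
    \mathbf{a}(t) \;=\; \frac{6\bigl(\mathbf{r}_f - \mathbf{r}(t)\bigr) \;-\; \bigl(4\,\mathbf{v}(t) + 2\,\mathbf{v}_f\bigr)\, t_{go}}{t_{go}^{2}},
    \qquad t_{go} = t_f - t,
    \]

where \(\mathbf{r}_f\) is the desired landing position and \(\mathbf{v}_f\) is a terminal velocity chosen to realize the commanded impact angle; in the hopping-robot setting, the lateral component of \(\mathbf{a}(t)\) would be produced by thruster orientation control.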
Tactile information acquired through palpation plays a crucial role in surface characterisation and tissue differentiation, an essential clinical requirement during surgery. In Minimally Invasive Surgery, access is restricted, and the tactile feedback available to surgeons is therefore reduced. This paper presents a novel stiffness-controllable, dynamic-force-range sensor that can provide remote haptic feedback. The sensor has an abraded optical fibre integrated into a silicone dome. Forces applied to the dome change the curvature of the fibre, resulting in light attenuation. By changing the pressure within the dome, and thereby adjusting the sensor's stiffness, we are able to modify the force measurement range. Results from our experimental study demonstrate that increasing the pressure inside the dome increases the force range whilst decreasing force sensitivity. We show that the maximum force measured by our sensor prototype at 20 mm/min was 5.02 N, 6.70 N and 8.83 N for applied pressures of 0 psi (0 kPa), 0.5 psi (3.45 kPa) and 1 psi (6.9 kPa), respectively. The sensor has also been tested to estimate the stiffness of 13 phantoms with different elastic moduli. Results show the elastic modulus sensing range of the proposed sensor to be from 8.58 kPa to 165.32 kPa.
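The abstract does not specify how the phantom elastic moduli are recovered from the sensor readings. As a generic illustration only (not the authors' calibration procedure), one common route is to fit calibrated force-indentation data to the Hertzian contact model for a hemispherical indenter, F = (4/3) E_eff sqrt(R) d^(3/2), as sketched below; the tip radius, data, and function names are assumptions.

    # Generic illustration (not the authors' method): estimating an effective
    # elastic modulus from calibrated force-indentation data using the Hertzian
    # contact model for a hemispherical indenter, F = (4/3) * E_eff * sqrt(R) * d**1.5.
    import numpy as np

    def estimate_modulus(force_N, indentation_m, tip_radius_m):
        """Least-squares fit of the Hertz model; returns the effective modulus E_eff in Pa."""
        d = np.asarray(indentation_m)
        f = np.asarray(force_N)
        x = (4.0 / 3.0) * np.sqrt(tip_radius_m) * d ** 1.5   # model: F = E_eff * x
        return float(np.sum(x * f) / np.sum(x * x))          # linear fit through the origin

    # Toy usage with synthetic data for a 10 mm radius dome and E_eff = 50 kPa.
    d = np.linspace(0.0, 2e-3, 20)
    f = (4.0 / 3.0) * 50e3 * np.sqrt(10e-3) * d ** 1.5
    print(estimate_modulus(f, d, 10e-3))  # ~50000 Pa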
Conversational user interfaces have transformed human-computer interaction by providing nearly real-time responses to queries. However, misunderstandings between the user and the system persist. This study explores the significance of interactional language in dialogue repair between virtual assistants and users by analyzing interactions with Google Assistant and Siri in both English and Spanish, focusing on the assistants' use of, and response to, the colloquial other-initiated repair strategy "huh?", which is prevalent in human-human dialogue repair. Findings revealed ten distinct assistant-generated repair strategies but an inability to replicate human-like strategies such as "huh?". Despite slight variations in user acceptability judgments between the two surveyed languages, results indicated an overall hierarchy of preference for specific dialogue repair strategies, with a notable disparity between the most preferred strategies and those frequently used by the assistants. These findings highlight discrepancies in how interactional language is used in human-computer interaction, underscoring the need for further research on the impact of interactional elements across different languages to advance the development of conversational user interfaces across domains, including human-robot interaction.
For certain tasks in logistics, especially bin picking and packing, humans resort to a strategy of grasping multiple objects simultaneously, thus reducing picking and transport time. In contrast, robotic systems mainly grasp only one object per picking action, which leads to inefficiencies that could be resolved with smarter gripping hardware and strategies. The development of new manipulators, robotic hands, and hybrid or specialized grippers can already account for the challenges of multi-object grasping at the design stage. This paper introduces different hardware solutions and tests possible grasp strategies for the simultaneous grasping of multiple objects (SGMO). The four hardware solutions presented here are: an under-actuated Constriction Gripper, a Linear Scoop Gripper suitable for grasping deformable objects, a Hybrid Compliant Gripper equipped with a mini vacuum gripper on each fingertip, and a Two-finger Palm Hand whose fingers are optimized in PyBullet simulation for maximum in-hand manipulation workspace. Most of these hardware solutions are based on the DLR CLASH end-effector and feature variable-stiffness actuation, high impact robustness, small contact forces, and a low-cost design. To compare the capability to simultaneously grasp multiple objects with the capability to grasp a single delicate object in a cluttered environment, the manipulators are tested with four different objects in a purpose-designed benchmark. The results serve as a guideline for future commercial applications of these strategies.
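The abstract mentions that the Two-finger Palm Hand's fingers were optimized in PyBullet for a maximal in-hand manipulation workspace but does not describe the metric. The sketch below shows one plausible way such a workspace score could be evaluated for a candidate finger design; the URDF path, fingertip link index, and convex-hull metric are placeholders and do not reflect the authors' actual optimization setup.

    # Hypothetical sketch of a fingertip-workspace metric in PyBullet (the URDF path,
    # link index, and metric are assumptions, not the paper's optimization setup).
    import numpy as np
    import pybullet as p
    from scipy.spatial import ConvexHull

    p.connect(p.DIRECT)                                      # headless physics client
    robot = p.loadURDF("two_finger_palm_hand.urdf", useFixedBase=True)  # hypothetical model
    fingertip_link = 3                                       # placeholder fingertip link index

    # Collect limits of all actuated (revolute/prismatic) joints.
    joints, limits = [], []
    for j in range(p.getNumJoints(robot)):
        info = p.getJointInfo(robot, j)
        if info[2] in (p.JOINT_REVOLUTE, p.JOINT_PRISMATIC):
            joints.append(j)
            limits.append((info[8], info[9]))                # lower, upper joint limit

    # Sample random configurations and record the fingertip position.
    rng = np.random.default_rng(0)
    points = []
    for _ in range(2000):
        for j, (lo, hi) in zip(joints, limits):
            p.resetJointState(robot, j, rng.uniform(lo, hi))
        points.append(p.getLinkState(robot, fingertip_link)[4])  # world-frame link position

    volume = ConvexHull(np.array(points)).volume             # crude workspace score to maximize
    print("approximate fingertip workspace volume:", volume)
    p.disconnect()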
Human dexterity is an invaluable capability for the precise manipulation of objects in complex tasks. The capability of robots to similarly grasp and perform in-hand manipulation of objects is critical for their use in the ever-changing human environment and for their ability to replace human labor. In recent decades, significant effort has been devoted to endowing robotic systems with in-hand manipulation capabilities. Initial robotic manipulators followed carefully programmed paths, while later attempts provided solutions based on analytical modeling of motion and contact. However, these have failed to provide practical solutions due to their inability to cope with complex environments and uncertainties. Therefore, the effort has shifted to learning-based approaches, in which data is collected from the real world or through simulation during repeated attempts to complete various tasks. The vast majority of learning approaches focus either on learning data-driven models that describe the system to some extent or on Reinforcement Learning (RL). RL, in particular, has seen growing interest due to its remarkable ability to generate solutions to problems with minimal human guidance. In this survey paper, we track the development of learning approaches for in-hand manipulation and explore the associated challenges and opportunities. The survey is designed both as an introduction for novices to the field, with a glossary of terms, and as a guide to recent advances for experienced practitioners.
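For readers new to RL, the following is a minimal sketch of the agent-environment interaction loop that most of the surveyed learning approaches build on, written against the Gymnasium API; a random policy stands in for the learned one, and a simple built-in environment is used only to keep the example self-contained (dedicated in-hand manipulation environments, such as the Shadow Hand tasks in gymnasium-robotics, expose the same interface).

    # Minimal sketch of the standard RL interaction loop (Gymnasium API); a random
    # policy stands in for a trained one, and a built-in environment keeps it self-contained.
    import gymnasium as gym

    env = gym.make("Pendulum-v1")             # in-hand manipulation tasks expose the same interface
    obs, info = env.reset(seed=0)

    episode_return = 0.0
    for _ in range(200):
        action = env.action_space.sample()    # replace with policy(obs) once trained
        obs, reward, terminated, truncated, info = env.step(action)
        episode_return += reward
        if terminated or truncated:
            obs, info = env.reset()

    print("return of the random policy:", episode_return)
    env.close()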