Pub Date: 2026-02-12. eCollection Date: 2025-01-01. DOI: 10.3389/frobt.2025.1734564
Yashwanthi Anand, Nnamdi Nwagwu, Kevin Sabbe, Naomi T Fitter, Sandhya Saisubramanian
Learning from human feedback is a popular approach to train robots to adapt to user preferences and improve safety. Existing approaches typically consider a single querying (interaction) format when seeking human feedback and do not leverage multiple modes of user interaction with a robot. We examine how to learn a penalty function associated with unsafe behaviors using multiple forms of human feedback, by optimizing both the query state and the feedback format. Our proposed adaptive feedback selection is an iterative, two-phase approach that first selects critical states for querying and then uses information gain to select a feedback format for querying across the sampled critical states. The feedback format selection also accounts for the cost and probability of receiving feedback in a given format. Our experiments in simulation demonstrate the sample efficiency of our approach in learning to avoid undesirable behaviors. The results of our user study with a physical robot highlight the practicality and effectiveness of adaptive feedback selection in seeking informative, user-aligned feedback that accelerates learning. Experiment videos, code, and supplementary materials are available on our website: https://tinyurl.com/AFS-learning.
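The format-selection phase described above can be sketched in a few lines. This is a minimal illustration of information-gain-based selection, not the authors' implementation: the hypothesis prior, the per-format response posteriors, the query costs, the response probabilities, and the linear cost-adjusted score are all assumptions made for the example.

```python
import math

def entropy(probs):
    """Shannon entropy, in bits, of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def expected_info_gain(prior, posteriors):
    """Expected entropy reduction over penalty-function hypotheses.

    posteriors: one (response_probability, posterior) pair per possible
    user response in a given feedback format.
    """
    return entropy(prior) - sum(p * entropy(post) for p, post in posteriors)

def select_format(formats, prior):
    """Pick the format maximizing a cost-adjusted expected gain.

    formats: name -> (posteriors, query_cost, p_answer), where p_answer is
    the probability of actually receiving feedback in that format.
    The score p_answer * gain - cost is an assumed trade-off, not the paper's.
    """
    def score(name):
        posteriors, cost, p_answer = formats[name]
        return p_answer * expected_info_gain(prior, posteriors) - cost
    return max(formats, key=score)
```

For instance, under a uniform prior over four hypothesized penalty functions, a binary query whose answer halves the hypothesis set yields one bit of expected gain; a demonstration that resolves all uncertainty can still lose out once its higher cost and lower response probability are factored in.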
Title: Adaptive querying for reward learning from human feedback. Frontiers in Robotics and AI 12:1734564. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12935605/pdf/
Mine emergencies demand rapid and informed decision-making under extreme conditions, often placing personnel in life-threatening situations. Robotic assistance offers the potential to reduce unnecessary human exposure during such operations. This study examines the specific informational needs and communication preferences of mine rescue personnel for designing robotic systems for underground emergency response. Semi-structured interviews were developed and conducted with ten mine rescue personnel and subject matter experts (SMEs). Responses were analyzed using thematic analysis and compared with established cognitive models to derive key design recommendations. Drawing on both field experience and hypothetical rescue scenarios, participants provided insights into key functional aspects of robotic systems, including mapping and navigation, gas detection and environmental monitoring, communication capabilities, system reliability, control, and the robot's specific roles during operations. The qualitative data were transcribed and analyzed to identify recurring themes and critical user guidelines. The findings revealed insights into the informational and interface recommendations of rescue teams, particularly the need for real-time situational data and customizable human-robot interfaces tailored to emergency scenarios. These results expose key deficiencies in current human-robot interaction systems and offer actionable guidance for designing robotic technologies that better align with the operational needs of experienced responders. The outcomes of this study can serve as practical guidelines for developing effective interfaces to support underground mine rescue missions.
Title: Underground mine rescue robotic systems: insights into human-robot information exchange. Authors: Roya Bakzadeh, Rana Alhaj-Bedar, Sarah Wilson, Vasileios Androulakis, Hassan Khaniani, Sihua Shao, Mostafa Hassanalian, Pedram Roghanchi. Pub Date: 2026-02-11. DOI: 10.3389/frobt.2026.1698570. Frontiers in Robotics and AI 13:1698570. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12932144/pdf/
Pub Date: 2026-02-10. eCollection Date: 2026-01-01. DOI: 10.3389/frobt.2026.1751222
David Howard
Reproducibility is a particular challenge for soft robotics, yet it is core to the field's development and maturation. This perspective examines reproducibility: what it is, what it means, and how it can be applied to soft robotics. We first discuss reproducibility and explain why it is a critical consideration for the field. Our core contribution is then to define three moonshot goals that collectively chart a path towards a reproducible future for soft robotics. First, we discuss methods for testing and sharing data. Second, we show how testing procedures from other scientific disciplines can provide broad coverage of the different types of soft robotics tests we might want to perform. Finally, we highlight the need for methods to quantitatively compare the embodied intelligence that lies at the heart of soft robotics research. If successful, these steps would put the field in an excellent position to develop into the future.
Title: Operationalising reproducibility in soft robotics. Frontiers in Robotics and AI 13:1751222. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12929158/pdf/
Pub Date: 2026-02-10. eCollection Date: 2026-01-01. DOI: 10.3389/frobt.2026.1734848
Tsubasa Wakatsuki, Norimasa Yamada
Morphological computation (MC), the idea that body mechanics contribute to computation, has been widely explored in robotics and examined in humans from a physiological perspective. In this study, we report a behavioral pattern consistent with MC under temporal uncertainty. This proof-of-concept single-subject study examined whether human motor control shows behavioral signatures consistent with MC within a temporal-preparation paradigm. One participant completed 160 trials across four entropy levels (0, 1.0, 1.5, 2.0 bits) in two tasks: a low-embodiment button-pressing movement and a high-embodiment reaching movement. The reaching movement tended to show decreasing response variability (coefficient of variation, CV) with increasing temporal uncertainty, whereas variability in the button-pressing movement tended to remain flat or increase slightly. Reaction time (RT) patterns also diverged: RTs tended to lengthen with longer foreperiods in the reaching condition but shortened in the button-pressing condition. Moreover, spatial accuracy in the reaching movement tended to improve across foreperiods. These adaptations emerged without explicit strategy instructions and may reflect sensitivity to temporal context. Taken together, these patterns appear consistent with MC-inspired accounts in which limb mechanics and modest co-contraction may filter temporal uncertainty rather than amplify it. Although constrained by a single-subject, four-level design, the findings offer preliminary evidence suggestive of embodied-intelligence principles that may generalize to human motor control, highlighting commonalities between biological and robotic systems in brain-body-environment dynamics.
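The two quantities the design turns on are the Shannon entropy of the foreperiod distribution and the CV of the responses. A minimal sketch follows; the example distributions in the usage note are assumptions chosen only to reproduce the stated bit values, not the study's actual foreperiod schedules.

```python
import math

def entropy_bits(probs):
    """Shannon entropy, in bits, of a discrete foreperiod distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def coefficient_of_variation(samples):
    """CV = sample standard deviation divided by the mean,
    the response-variability measure used in the study."""
    n = len(samples)
    mean = sum(samples) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in samples) / (n - 1))
    return sd / mean
```

A single fixed foreperiod gives 0 bits; two equiprobable foreperiods give 1.0 bit; a 0.5/0.25/0.25 split gives 1.5 bits; four equiprobable foreperiods give 2.0 bits, matching the four entropy levels.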
Title: Entropy-dependent human motor modulation consistent with morphological computation in a single subject. Frontiers in Robotics and AI 13:1734848. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12929133/pdf/
Pub Date: 2026-02-09. eCollection Date: 2026-01-01. DOI: 10.3389/frobt.2026.1735467
Dickson Chiu Yu Wong, Zheng H Zhu
This paper addresses the challenge of detecting and recovering from slip during robotic grasping of unknown objects, with the objective of establishing a robust slip-recovery controller for an anthropomorphic hand that requires no on-site or per-object calibration. The hand is equipped with tri-axial piezoresistive tactile force sensors on each finger, and the proposed approach is validated through experimental analysis. The proposed methodology eliminates the need for object- or pose-specific calibration, explicit friction modelling, dense tactile arrays, line-of-sight vision, and a data-hungry learning process, enabling real-time implementation with minimal computation and integration effort. Using an online baseline acquired from initial sensor readings, slip is detected from relative changes between consecutive samples of the baseline-subtracted resultant tangential force, and object engagement is determined when the normal force reading deviates from a no-slip baseline beyond a preset threshold. Upon detecting slip, each finger increases its gripping force in closed-loop control until the slip stops, while enforcing motor-current protection in finger control to prevent actuator overload and object damage. Experiments were conducted on objects with different rigidity, weight, and surface textures, including an aluminium tube, a plastic water bottle, and a sponge. Additionally, the response time and variations in gripping force were evaluated. The results demonstrate rapid slip response via localized per-finger correction, good object conformability, and effective re-stabilization under different lifting speeds and sudden external disturbances. The per-finger design applies the minimum necessary correction at the offending finger, reducing unnecessary force increases on other fingers and improving grasp efficiency. This approach represents a practical solution for warehouse picking, human-robot collaboration, and in situ manipulation where task-specific calibrations, visual access, or training datasets are impractical.
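The per-finger detect-and-correct cycle described in the abstract can be sketched as follows. This is a hypothetical reading of the stated rule, not the authors' controller: the slip threshold, grip increment, command clamp, and division floor are placeholder values, and a real implementation would map the clamped command onto an actual motor-current limit.

```python
class FingerSlipController:
    """Per-finger slip detection from baseline-subtracted tangential force."""

    def __init__(self, slip_thresh=0.2, grip_step=0.1, max_cmd=0.3, eps=0.01):
        self.slip_thresh = slip_thresh  # relative change that counts as slip
        self.grip_step = grip_step      # grip increment per control cycle
        self.max_cmd = max_cmd          # clamp standing in for motor-current protection
        self.eps = eps                  # floor (N) to avoid division by zero
        self.baseline = None            # online baseline from the initial reading
        self.prev = 0.0                 # previous baseline-subtracted tangential force
        self.cmd = 0.0                  # current grip command

    def update(self, fx, fy):
        """Feed one sample's tangential components; returns the grip command."""
        ft = (fx * fx + fy * fy) ** 0.5      # resultant tangential force
        if self.baseline is None:
            self.baseline = ft               # first reading defines the baseline
            return self.cmd
        rel = ft - self.baseline             # baseline-subtracted value
        change = abs(rel - self.prev) / max(abs(self.prev), self.eps)
        if change > self.slip_thresh:        # jump between consecutive samples: slip
            self.cmd = min(self.cmd + self.grip_step, self.max_cmd)
        self.prev = rel
        return self.cmd
```

A steady hold leaves the command untouched; each detected jump tightens only the offending finger, and the clamp bounds how far the correction can go.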
Title: Calibration-free per-finger force-feedback slip control for grasping by anthropomorphic hand with tri-axial tactile sensors. Frontiers in Robotics and AI 13:1735467. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12926654/pdf/
Pub Date: 2026-02-06. eCollection Date: 2026-01-01. DOI: 10.3389/frobt.2026.1785247
Hammad Nazeer, Farzan M Noori, Rayyan Azam Khan
Title: Editorial: Integrative approaches with BCI and robotics for improved human interaction. Frontiers in Robotics and AI 13:1785247. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12921480/pdf/
Cloth unfolding and folding are fundamental tasks in autonomous robotic cloth manipulation as Physical AI. Driven by recent advances in deep learning, this area has developed rapidly in recent years. This review aims to systematically identify and summarize current progress in deep learning-based cloth unfolding and folding. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, 41 relevant papers from 2019 to 2024 were selected for analysis. We examine various factors influencing cloth manipulation and find that, while current methods show impressive performance, several challenges remain unaddressed. These challenges include irregular cloth sizes and diverse initial garment states. Concerning datasets, there is a need for improved real-world data collection systems and more realistic cloth simulators, and the Sim2Real gap must be carefully considered. Additionally, the review highlights the importance of incorporating multi-modal sensors into current platforms and the emergence of novel primitive actions that enhance performance. The need for more consistent comparison metrics is emphasized, and strategies for addressing failure modes are discussed to further advance the field. From an algorithmic perspective, we reorganize existing learning methods into six learning and control paradigms: perception-guided heuristics, goal-conditioned manipulation policies, predictive and model-based state representation methods, reward-driven reinforcement learning over primitive actions, demonstration-driven skill transfer methods, and emerging large language model-based planning methods. We discuss how each paradigm contributes to unfolding and folding, their respective strengths and limitations, and the open problems that arise. Finally, we summarize the remaining challenges and provide future perspectives for physical AI.
Title: Deep learning-based robotic cloth manipulation applications: systematic review, challenges and opportunities for physical AI. Authors: Ningquan Gu, Mitsuhiro Hayashibe, Kyo Kutsuzawa, Hui Yu. Pub Date: 2026-02-06. DOI: 10.3389/frobt.2026.1752914. Frontiers in Robotics and AI 13:1752914. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12921407/pdf/
Robot failures in Human-Robot Interaction (HRI), though often stemming from technical limitations, can have severe effects on the interactional dynamics between humans and robots. Prior empirical research has led to conflicting findings on how such failures influence user perceptions and the overall success of the interaction. In this study, we investigate how human participants respond to robot failures on a moment-to-moment basis, with a particular focus on how social roles, responsibilities, and agency are negotiated as these episodes unfold. We examine how responses and helping behaviors are instantiated, and which factors facilitate or hinder recovery strategies. We focus on kinematic failures, such as interruptions in motion, unsuccessful grasping, or dropping objects, that occurred during Tic-Tac-Toe games between human participants (n = 17) and the humanoid robot Epi. Our analysis combines multimodal conversation analysis (MCA) and thick description, drawing on our interdisciplinary backgrounds in cognitive science and feminist Science and Technology Studies (STS). We present selected interactional sequences that illustrate a range of participant responses, including physical repair and scaffolding, interpretive support, emotional care, sustained monitoring, and dynamic negotiation of agency. These observations demonstrate how humans co-construct interactional continuity and robot competence through distributed, multimodal, and affective forms of help. They also reveal how agency is dynamically reconfigured, and how roles and responsibilities are distributed across human and robotic actors. We show how the burden of repair often falls to the human participant and conclude by reflecting on the setting and methods used, specifically with regard to the role of the robot as a research tool.
Title: Helping or watching it happen: how participants respond to robot failures in a turn-taking game. Authors: Samantha Stedtler, Katherine Harrison, Valentina Fantasia. Pub Date: 2026-02-06. DOI: 10.3389/frobt.2025.1664334. Frontiers in Robotics and AI 12:1664334. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12921409/pdf/
Pub Date: 2026-02-05. eCollection Date: 2026-01-01. DOI: 10.3389/frobt.2026.1749105
Henk H A Jekel, Alejandro Díaz Rosales, Luka Peternel
This paper presents a visio-verbal teleimpedance interface for commanding 3D stiffness ellipsoids to a remote robot through a combination of the operator's gaze and verbal interaction. Gaze is detected by an eye tracker, allowing the system to understand the context in terms of what the operator is currently looking at in the scene. A Vision-Language Model (VLM) processes this information together with the verbal interaction, enabling the operator to communicate an intended action or provide corrections. Based on these inputs, the interface generates appropriate stiffness matrices for different physical interaction actions. To validate the proposed visio-verbal teleimpedance interface, we conducted a series of experiments on a setup comprising a Force Dimension Sigma.7 haptic device controlling the motion of a remote Kuka LBR iiwa robotic arm. The operator's gaze was tracked by Tobii Pro Glasses 2, and verbal commands were processed by a VLM using GPT-4o. The first experiment explored the optimal prompt configuration for the interface. The second and third experiments demonstrated different functionalities of the interface on a slide-in-the-groove task.
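A commanded stiffness ellipsoid maps to a symmetric Cartesian stiffness matrix K = sum_m k_m * a_m * a_m^T, where the a_m are the ellipsoid's orthonormal principal axes and the k_m its per-axis gains. A minimal sketch of that assembly follows; the axis directions and gain values are illustrative only, since in the actual interface the VLM chooses them from gaze and speech context.

```python
def stiffness_from_ellipsoid(axes, gains):
    """Assemble K = sum_m gains[m] * axes[m] * axes[m]^T as a 3x3 matrix.

    axes: three orthonormal principal directions of the stiffness ellipsoid
    gains: stiffness magnitude (N/m) along each direction
    """
    K = [[0.0] * 3 for _ in range(3)]
    for k, a in zip(gains, axes):
        for i in range(3):
            for j in range(3):
                K[i][j] += k * a[i] * a[j]  # rank-one update per principal axis
    return K
```

For a slide-in-the-groove task with the groove along x, one might command low stiffness along x and high stiffness across it; with axes aligned to the world frame and gains (100, 1000, 1000), K comes out diagonal.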
{"title":"Visio-verbal teleimpedance interface: enabling semi-autonomous control of physical interaction via eye tracking and speech.","authors":"Henk H A Jekel, Alejandro Díaz Rosales, Luka Peternel","doi":"10.3389/frobt.2026.1749105","DOIUrl":"https://doi.org/10.3389/frobt.2026.1749105","url":null,"abstract":"<p><p>This paper presents a visio-verbal teleimpedance interface for commanding 3D stiffness ellipsoids on a remote robot through a combination of the operator's gaze and verbal interaction. Gaze is detected by an eye tracker, allowing the system to understand the context in terms of what the operator is currently looking at in the scene. A Vision-Language Model (VLM) processes this gaze context together with the operator's verbal input, enabling the operator to communicate an intended action or provide corrections. Based on these inputs, the interface generates appropriate stiffness matrices for different physical interaction actions. To validate the proposed visio-verbal teleimpedance interface, we conducted a series of experiments on a setup comprising a Force Dimension Sigma.7 haptic device controlling the motion of a remote KUKA LBR iiwa robotic arm. The operator's gaze is tracked by Tobii Pro Glasses 2, and verbal commands are processed by a VLM using GPT-4o. The first experiment explored the optimal prompt configuration for the interface. The second and third experiments demonstrated different functionalities of the interface on a slide-in-the-groove task.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"13 ","pages":"1749105"},"PeriodicalIF":3.0,"publicationDate":"2026-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12926544/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147285945","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-02-05 · eCollection Date: 2025-01-01 · DOI: 10.3389/frobt.2025.1691576
Faria Jaheen, Vinod Gutta, Pascal Fallavollita
This work presents a machine learning-driven framework for data-efficient kinematic modeling and workspace optimization in modular C-arm fluoroscopy systems integrated with operating tables. A comprehensive dataset of joint configurations and end-effector poses, annotated with voxelized collision status, enables the training of predictive models across multiple system configurations ranging from 5 to 9 degrees of freedom. The models are trained and validated on expansive simulation-derived datasets, with clinical assessment through simulated X-ray generation, achieving sub-millimetric positional accuracy and sub-degree angular precision while delivering real-time inference that surpasses conventional methods in scalability, robustness, and computational latency. The proposed framework demonstrates the viability of data-driven trajectory planning in multi-degree-of-freedom C-arm systems, providing a clinically relevant solution for improving imaging access and reducing intraoperative collision risk.
{"title":"Modelling C-arm fluoroscopy and operating table kinematics via machine learning.","authors":"Faria Jaheen, Vinod Gutta, Pascal Fallavollita","doi":"10.3389/frobt.2025.1691576","DOIUrl":"https://doi.org/10.3389/frobt.2025.1691576","url":null,"abstract":"<p><p>This work presents a machine learning-driven framework for data-efficient kinematic modeling and workspace optimization in modular C-arm fluoroscopy systems integrated with operating tables. A comprehensive dataset of joint configurations and end-effector poses, annotated with voxelized collision status, enables the training of predictive models across multiple system configurations ranging from 5 to 9 degrees of freedom. The models are trained and validated on expansive simulation-derived datasets, with clinical assessment through simulated X-ray generation, achieving sub-millimetric positional accuracy and sub-degree angular precision while delivering real-time inference that surpasses conventional methods in scalability, robustness, and computational latency. The proposed framework demonstrates the viability of data-driven trajectory planning in multi-degree-of-freedom C-arm systems, providing a clinically relevant solution for improving imaging access and reducing intraoperative collision risk.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1691576"},"PeriodicalIF":3.0,"publicationDate":"2026-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12917506/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147272542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}