Pub Date: 2026-01-05 | eCollection Date: 2025-01-01 | DOI: 10.3389/frobt.2025.1694338
Luisa Damiano, Antonio Fleres, Andrea Roli, Pasquale Stano
Wetware Network-Based Artificial Intelligence (WNAI) introduces a new approach to robotic cognition and artificial intelligence: autonomous cognitive agents built from synthetic chemical networks. Rooted in Wetware Neuromorphic Engineering, WNAI shifts the focus of this emerging field from disembodied computation and biological mimicry to reticular chemical self-organization as a substrate for cognition. At the theoretical level, WNAI integrates insights from network cybernetics, autopoietic theory and enaction to frame cognition as a materially grounded, emergent phenomenon. At the heuristic level, WNAI defines its role as complementary to existing leading approaches. On the one hand, it complements embodied AI and xenobotics by expanding the design space of artificial embodied cognition into fully synthetic domains. On the other hand, it engages in mutual exchange with neural network architectures, advancing cross-substrate principles of network-based cognition. At the technological level, WNAI offers a roadmap for implementing chemical neural networks and protocellular agents, with potential applications in robotic systems requiring minimal, adaptive, and substrate-sensitive intelligence. By situating wetware neuromorphic engineering within the broader landscape of robotics and AI, this article outlines a programmatic framework that highlights its potential to expand artificial cognition beyond silicon and biohybrid systems.
Article: "Wetware network-based AI: a chemical approach to embodied cognition for robotics and artificial intelligence." Frontiers in Robotics and AI, 12:1694338. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12812610/pdf/
Pub Date: 2026-01-02 | eCollection Date: 2025-01-01 | DOI: 10.3389/frobt.2025.1569040
Ahmed Salem, Kaoru Sumi
As robots become increasingly integrated into daily life, their ability to influence human emotions through verbal and nonverbal expressions is gaining attention. While robots have been explored for their role in emotional expression, their potential in emotion regulation, particularly in mitigating or amplifying embarrassment, remains under-explored in human-robot interaction. To address this gap, this study investigates whether and how robots can regulate embarrassment through their responses. A between-subjects experiment was conducted with 96 participants (48 males and 48 females) using the social robot Furhat. Participants experienced an embarrassing situation induced by a failure-of-meshing scenario, followed by the robot adopting one of three response attitudes: neutral, empathic, or ridiculing. Additionally, the robot's social agency was manipulated by varying its facial appearance between a human-like and an anime-like appearance. The findings indicate that embarrassment was effectively induced, as evidenced by physiological data, body movements, facial expressions, and participants' verbal responses. The anime-faced robot elicited lower embarrassment and arousal due to its lower perceived social agency and anthropomorphism. The robot's attitude was the dominant factor shaping participants' emotional responses and perceptions. The neutral and empathic attitudes combined with an anime face were the most effective in eliciting mild emotions and mitigating embarrassment. Interestingly, an empathic attitude may be favored over a neutral one, as it elicited the lowest embarrassment. However, an empathic attitude risks shaming the participant: its indirect confrontation inherently acknowledges the embarrassing incident, which is undesirable in Japanese culture. Nevertheless, in terms of participants' evaluations of the robot, a neutral attitude was the most favored.
This study highlights the role of robot responses in emotion regulation, particularly in embarrassment control, and provides insights into designing socially intelligent robots that can modulate human emotions effectively.
Article: "Embarrassment in HRI: remediation and the role of robot responses in emotion control." Frontiers in Robotics and AI, 12:1569040. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12807912/pdf/
Pub Date: 2025-12-19 | eCollection Date: 2025-01-01 | DOI: 10.3389/frobt.2025.1708987
Matija Mavsar, Mihael Simonič, Aleš Ude
Collaboration between humans and robots is essential for optimizing the performance of complex tasks in industrial environments, reducing worker strain, and improving safety. This paper presents an integrated human-robot collaboration (HRC) system that leverages advanced intention recognition for real-time task sharing and interaction. By utilizing state-of-the-art human pose estimation combined with deep learning models, we developed a robust framework for detecting and predicting worker intentions. Specifically, we employed LSTM-based and transformer-based neural networks with convolutional and pooling layers to classify human hand trajectories, achieving higher accuracy compared to previous approaches. Additionally, our system integrates dynamic movement primitives (DMPs) for smooth robot motion transitions, collision prevention, and automatic motion onset/cessation detection. We validated the system in a real-world industrial assembly task, demonstrating its effectiveness in enhancing the fluency, safety, and efficiency of human-robot collaboration. The proposed method shows promise in improving real-time decision-making in collaborative environments, offering a safer and more intuitive interaction between humans and robots.
Article: "Human intention recognition by deep LSTM and transformer networks for real-time human-robot collaboration." Frontiers in Robotics and AI, 12:1708987. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12757248/pdf/
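The entry above describes classifying human hand trajectories with LSTM-based networks that include convolutional and pooling layers. The paper's actual architecture and hyperparameters are not given here; the PyTorch sketch below only illustrates the general pattern (a Conv1d/pooling front-end feeding an LSTM whose final hidden state is classified), with the input dimensionality, layer sizes, and class count all chosen arbitrarily for illustration.

```python
import torch
import torch.nn as nn

class TrajectoryClassifier(nn.Module):
    """Toy hand-trajectory classifier: Conv1d + pooling front-end, then an LSTM head.

    All sizes are hypothetical; the paper's real configuration is not specified here.
    """

    def __init__(self, in_dim=3, n_classes=4, hidden=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_dim, 16, kernel_size=3, padding=1),  # local temporal features
            nn.ReLU(),
            nn.MaxPool1d(2),                                  # halve the time axis
        )
        self.lstm = nn.LSTM(16, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        # x: (batch, time, in_dim), e.g. a sequence of 3-D hand positions
        z = self.conv(x.transpose(1, 2)).transpose(1, 2)  # (batch, time//2, 16)
        _, (h, _) = self.lstm(z)                          # final hidden state summarizes the motion
        return self.head(h[-1])                           # (batch, n_classes) logits

# Usage: 8 trajectories of 50 timesteps with 3 coordinates each
logits = TrajectoryClassifier()(torch.randn(8, 50, 3))
```

In practice the class with the highest logit would be taken as the predicted worker intention (e.g. which assembly part the hand is reaching for).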
Pub Date: 2025-12-19 | eCollection Date: 2025-01-01 | DOI: 10.3389/frobt.2025.1682437
Edgar Welte, Rania Rayyes
Dexterous manipulation is a crucial yet highly complex challenge in humanoid robotics, demanding precise, adaptable, and sample-efficient learning methods. As humanoid robots are usually designed to operate in human-centric environments and interact with everyday objects, mastering dexterous manipulation is critical for real-world deployment. Traditional approaches, such as reinforcement learning and imitation learning, have made significant strides, but they often struggle due to the unique challenges of real-world dexterous manipulation, including high-dimensional control, limited training data, and covariate shift. This survey provides a comprehensive overview of these challenges and reviews existing learning-based methods for real-world dexterous manipulation, spanning imitation learning, reinforcement learning, and hybrid approaches. A promising yet underexplored direction is interactive imitation learning, where human feedback actively refines a robot's behavior during training. While interactive imitation learning has shown success in various robotic tasks, its application to dexterous manipulation remains limited. To address this gap, we examine current interactive imitation learning techniques applied to other robotic tasks and discuss how these methods can be adapted to enhance dexterous manipulation. By synthesizing state-of-the-art research, this paper highlights key challenges, identifies gaps in current methodologies, and outlines potential directions for leveraging interactive imitation learning to improve dexterous robotic skills.
Article: "Interactive imitation learning for dexterous robotic manipulation: challenges and perspectives-a survey." Frontiers in Robotics and AI, 12:1682437. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12757213/pdf/
Pub Date: 2025-12-18 | eCollection Date: 2025-01-01 | DOI: 10.3389/frobt.2025.1625968
Shoaib Mohd Nasti, Zahoor Ahmad Najar, Mohammad Ahsan Chishti
Navigating unknown environments without prior maps poses a significant challenge for mobile robots due to sparse rewards, dynamic obstacles, and limited prior knowledge. This paper presents an improved Deep Reinforcement Learning (DRL) framework based on the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm for adaptive mapless navigation. In addition to architectural enhancements, the proposed method incorporates a latent-state encoder and predictor module that transforms high-dimensional sensor inputs into compact embeddings. This compact representation reduces the effective dimensionality of the state space, enabling smoother value-function approximation and mitigating the overestimation errors common in actor-critic methods. Intrinsic rewards derived from prediction error in the latent space promote exploration of novel states: the intrinsic reward encourages the agent to prioritize uncertain yet informative regions, improving exploration efficiency under sparse extrinsic reward signals and accelerating convergence. Furthermore, training stability is achieved by regularizing the latent space with a maximum mean discrepancy (MMD) loss. By enforcing consistent latent dynamics, the MMD constraint reduces variance in target value estimation and yields more stable policy updates. Experimental results in simulated ROS2/Gazebo environments demonstrate that the proposed framework outperforms standard TD3 and other improved TD3 variants. Our model achieves a 93.1% success rate and a low 6.8% collision rate, reflecting efficient and safe goal-directed navigation.
Article: "Adaptive mapless mobile robot navigation using deep reinforcement learning based improved TD3 algorithm." Frontiers in Robotics and AI, 12:1625968. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12756063/pdf/
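The abstract names three ingredients: a latent-state encoder, an intrinsic reward equal to the latent prediction error, and an MMD regularizer on the latent space. The NumPy fragment below is a toy illustration of those three pieces in isolation, not the paper's implementation: the encoder here is a fixed random projection rather than a learned network, and every scale and dimension is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(obs, W):
    # Stand-in latent encoder: fixed random projection + tanh
    # (in the paper this would be a trained encoder network)
    return np.tanh(obs @ W)

def intrinsic_reward(z_pred, z_next, scale=1.0):
    # Curiosity bonus: mean squared error of the latent-dynamics
    # prediction against the actually observed next embedding
    return scale * float(np.mean((z_pred - z_next) ** 2))

def mmd_loss(z_batch, prior_batch, sigma=1.0):
    # Biased MMD^2 estimate with an RBF kernel, pulling the
    # distribution of encoded states toward a reference prior
    def k(a, b):
        d = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return np.exp(-d / (2.0 * sigma ** 2))
    return float(k(z_batch, z_batch).mean()
                 + k(prior_batch, prior_batch).mean()
                 - 2.0 * k(z_batch, prior_batch).mean())

# Toy usage: hypothetical 8-D observations mapped to 4-D latents
W = rng.normal(size=(8, 4))
z = encode(rng.normal(size=(5, 8)), W)
r_int = intrinsic_reward(z + 0.1, z)          # nonzero prediction error -> positive bonus
reg = mmd_loss(z, rng.normal(size=(5, 4)))    # regularize toward a unit-Gaussian prior
```

In a full TD3 agent, `r_int` would be added to the extrinsic reward during critic updates and `reg` to the encoder's training loss; those wiring details are omitted here.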
Pub Date: 2025-12-18 | eCollection Date: 2025-01-01 | DOI: 10.3389/frobt.2025.1716801
Antonio Fleres, Luisa Damiano
The challenge of sustainability in robotics is usually addressed in terms of materials, energy, and efficiency. Yet the long-term viability of robotic systems also depends on organizational interdependencies that shape how they are maintained, experienced, and integrated into human environments. The present article develops this systemic perspective by advancing the hypothesis that such interdependencies can be understood as self-organizing dynamics. To examine this hypothesis, we analyze the case of Sony's AIBO robotic dogs. Originally designed for social companionship, AIBO units gave rise to a hybrid socio-technical ecosystem in which owners, repair specialists, and ritual practices sustained the robots long after their commercial discontinuation. Building on self-organization theory, we introduce the concept of the "robosphere" as an evolving network of relations in which robotic and human agents co-constitute resilient, sustainability-oriented ecosystems. Extending self-organization beyond its classical biological and technical domains, we argue that robotic sustainability must be framed not as a narrow technical issue but as a complex, multifactorial, and distributed process grounded in organizational interdependencies that integrate technical, cognitive, social, and affective dimensions of human life. Our contribution is twofold. First, we propose a modeling perspective that interprets sustainability in robotics as an emergent property of these interdependencies, exemplified by repair, reuse, and ritual practices that prolonged AIBO's lifecycle. Second, we outline a set of systemic design principles to inform the development of future human-robot ecosystems. By situating the AIBO case within the robospheric framework, this Hypothesis and Theory article advances the view that hybrid socio-technical collectives can generate sustainability from within. 
It outlines a programmatic horizon for rethinking social robotics not as disposable products, but as integral nodes of co-evolving, sustainable human-robot ecologies.
Article: "From AIBO to robosphere. Organizational interdependencies in sustainable robotics." Frontiers in Robotics and AI, 12:1716801. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12756144/pdf/
Pub Date: 2025-12-18 | eCollection Date: 2025-01-01 | DOI: 10.3389/frobt.2025.1698591
Miranda Cravetz, Purva Vyas, Cindy Grimm, Joseph R Davidson
When a passively compliant hand grasps an object, slip events are often accompanied by flexion or extension of the finger or finger joints. This paper investigates whether a combination of orientation change and slip-induced vibration at the fingertip, as sensed by an inertial measurement unit (IMU), can be used as a slip indicator. Using a tendon-driven hand, which achieves passive compliance through underactuation, we performed 195 manipulation trials involving both slip and non-slip conditions. We then labeled this data automatically using motion-tracking data, and trained a convolutional neural network (CNN) to detect the slip events. Our results show that slip can be successfully detected from IMU data, even in the presence of other disturbances. This remains the case when deploying the trained network on data from a different gripper performing a new manipulation task on a previously unseen object.
Article: "Slip detection for compliant robotic hands using inertial signals and deep learning." Frontiers in Robotics and AI, 12:1698591. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12756126/pdf/
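The entry above trains a convolutional network on fingertip IMU signals to flag slip events. As a rough PyTorch sketch of that kind of binary classifier over fixed-length IMU windows (the channel count, window length, and layer sizes here are assumptions, not the paper's values):

```python
import torch
import torch.nn as nn

class SlipDetector(nn.Module):
    """Toy 1-D CNN over windows of 6-axis IMU data (3 accel + 3 gyro channels).

    Emits a single logit per window: slip vs. no slip. Sizes are illustrative.
    """

    def __init__(self, channels=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, 16, kernel_size=5, padding=2),  # short-time vibration features
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the time axis regardless of window length
            nn.Flatten(),
            nn.Linear(32, 1),          # one logit: slip vs. no slip
        )

    def forward(self, x):
        # x: (batch, channels, window)
        return self.net(x)

# Usage: 4 windows of 64 IMU samples; sigmoid maps logits to slip probabilities
probs = torch.sigmoid(SlipDetector()(torch.randn(4, 6, 64)))
```

Thresholding `probs` (e.g. at 0.5) would yield the per-window slip decision; the orientation-change cue the paper describes would enter through the gyro/orientation channels of the window.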
Robots that interact with humans are required to express emotions in ways that are appropriate to the context. While most prior research has focused primarily on basic emotions, real-life interactions demand more nuanced expressions. In this study, we extended the expressive capabilities of the android robot Nikola by implementing 63 facial expressions, covering not only complex emotions and physical conditions, but also differences in intensity. At Expo 2025 in Japan, more than 600 participants interacted with Nikola by describing situations in which they wanted the robot to perform facial expressions. The robot inferred emotions using a large language model and performed corresponding facial expressions. Questionnaire responses revealed that participants rated the robot's behavior as more appropriate and emotionally expressive when their instructions were abstract, compared to when they explicitly included emotions or physical states. This suggests that abstract instructions enhance perceived agency in the robot. We also investigated and discussed how impressions towards the robot varied depending on the expressions it performed and the personality traits of participants. This study contributes to the research field of human-robot interaction by demonstrating how adaptive facial expressions, in association with instruction styles, are linked to shaping human perceptions of social robots.
Article: "Evaluating human perceptions of android robot facial expressions based on variations in instruction styles." Ayaka Fujii, Carlos Toshinori Ishi, Kurima Sakai, Tomo Funayama, Ritsuko Iwai, Yusuke Takahashi, Takatsune Kumada, Takashi Minato. Pub Date: 2025-12-16 | DOI: 10.3389/frobt.2025.1728647. Frontiers in Robotics and AI, 12:1728647. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12747908/pdf/
Pub Date : 2025-12-16eCollection Date: 2025-01-01DOI: 10.3389/frobt.2025.1711675
Lina Moe, Benjamin Greenberg
This paper examines the evolving landscape of mobile robotics, focusing on challenges faced by roboticists working in industry when integrating robots into human-populated environments. Through interviews with sixteen industry professionals specializing in social mobile robotics, we investigated two primary research questions: (1) What approaches to person detection and representation are used in industry? and (2) How does the relationship between industry and academia impact the research process? Our findings reveal diverse approaches to human detection, ranging from basic obstacle avoidance to advanced systems that differentiate among classes of humans. We suggest that robotic system design overall and human detection in particular are influenced by whether researchers use a framework of safety or sociality, how they approach building complex systems, and how they develop metrics for success. Additionally, we highlight the gaps and synergies between industry and academic research, particularly regarding commercial readiness and the incorporation of human-robot interaction (HRI) principles into robotic development. This study underscores the importance of addressing the complexities of social navigation in real-world settings and suggests that strengthening avenues of communication between industry and academia will help to shape a sustainable role for robots in the physical and social world.
{"title":"From complexity to commercial readiness: industry insights on bridging gaps in human-robot interaction and social robot navigation.","authors":"Lina Moe, Benjamin Greenberg","doi":"10.3389/frobt.2025.1711675","journal":"Frontiers in Robotics and AI","volume":"12","pages":"1711675","openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12747840/pdf/"}
Pub Date : 2025-12-16eCollection Date: 2025-01-01DOI: 10.3389/frobt.2025.1694952
Veera Venkata Ram Murali Krishna Rao Muvva, Kunjan Theodore Joseph, Yogesh Chawla, Santosh Pitla, Marilyn Wolf
Introduction: This study introduces a custom-built uncrewed aerial vehicle (UAV) designed for precision agriculture, emphasizing modularity, adaptability, and affordability. Unlike commercial UAVs restricted by proprietary systems, this platform offers full customization and advanced autonomy capabilities.
Methods: The UAV integrates a Cube Blue flight controller for low-level control with a Raspberry Pi 4 companion computer that runs a Model Predictive Control (MPC) algorithm for high-level trajectory optimization. Instead of conventional PID controllers, this work adopts an optimal control strategy using MPC. The system also incorporates Kalman filtering to enable adaptive mission planning and real-time coordination with a moving uncrewed ground vehicle (UGV). Testing was performed in both simulation and outdoor field environments, covering static and dynamic waypoint tracking as well as complex trajectories.
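The receding-horizon idea behind MPC can be illustrated with a deliberately reduced sketch. This is not the paper's controller: the quadrotor dynamics are collapsed to a 1-D double integrator, and a tiny discrete action set searched exhaustively stands in for a proper optimization solver; horizon, time step, and cost weights are arbitrary illustrative choices.

```python
# Minimal receding-horizon MPC sketch for 1-D trajectory tracking.
# At each step, simulate every length-HORIZON action sequence, score it
# with a quadratic tracking-plus-control cost, and apply only the first
# action of the cheapest sequence (the defining MPC pattern).
from itertools import product

DT = 0.1                     # control period (s), illustrative
ACTIONS = (-1.0, 0.0, 1.0)   # candidate accelerations (m/s^2), illustrative
HORIZON = 3                  # prediction horizon (steps)

def step(pos, vel, acc):
    """Double-integrator dynamics, one semi-implicit Euler step."""
    vel = vel + acc * DT
    return pos + vel * DT, vel

def mpc_control(pos, vel, reference):
    """Return the first action of the cheapest action sequence.

    `reference` lists the desired positions over the horizon; the cost is
    squared tracking error plus a small control-effort penalty.
    """
    best_cost, best_action = float("inf"), 0.0
    for seq in product(ACTIONS, repeat=HORIZON):
        p, v, cost = pos, vel, 0.0
        for k, a in enumerate(seq):
            p, v = step(p, v, a)
            cost += (p - reference[k]) ** 2 + 0.001 * a ** 2
        if cost < best_cost:
            best_cost, best_action = cost, seq[0]
    return best_action

# Closed loop: track a ramp reference moving at 0.5 m/s from rest.
pos, vel = 0.0, 0.0
for t in range(50):
    ref = [0.5 * DT * (t + 1 + k) for k in range(HORIZON)]
    pos, vel = step(pos, vel, mpc_control(pos, vel, ref))
```

The structure mirrors what the abstract describes at larger scale: the Raspberry Pi re-solves a short-horizon optimization every control cycle and hands only the first command to the flight controller, which is what distinguishes MPC from a fixed-gain PID loop.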
Results: The UAV performed figure-eight, curved, and wind-disturbed trajectories with root mean square error values consistently between 8 and 20 cm during autonomous operations, with slightly higher errors in more complex trajectories. The system successfully followed a moving UGV along nonlinear, curved paths.
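The tracking metric reported above is the standard root-mean-square error over the flown trajectory. As a reminder of the computation, with illustrative numbers rather than the paper's data:

```python
# Root-mean-square error between a reference trajectory and the flown
# positions (same units in, same units out). Values are illustrative.
import math

def rmse(reference, actual):
    """RMSE between paired reference and actual positions."""
    return math.sqrt(
        sum((r - a) ** 2 for r, a in zip(reference, actual)) / len(reference)
    )

# A constant 10 cm offset along the whole path gives an RMSE of 0.10 m.
error = rmse([0.0, 1.0, 2.0], [0.1, 1.1, 2.1])
```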
Discussion: These results demonstrate that the proposed UAV platform is capable of precise autonomous navigation and real-time coordination, confirming its suitability for real-world agricultural applications and offering a flexible alternative to commercial UAV systems.
{"title":"Custom UAV with model predictive control for autonomous static and dynamic trajectory tracking in agricultural fields.","authors":"Veera Venkata Ram Murali Krishna Rao Muvva, Kunjan Theodore Joseph, Yogesh Chawla, Santosh Pitla, Marilyn Wolf","doi":"10.3389/frobt.2025.1694952","journal":"Frontiers in Robotics and AI","volume":"12","pages":"1694952","openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12747843/pdf/"}