Pub Date: 2025-12-19. eCollection Date: 2025-01-01. DOI: 10.3389/frobt.2025.1708987
Matija Mavsar, Mihael Simonič, Aleš Ude
Collaboration between humans and robots is essential for optimizing the performance of complex tasks in industrial environments, reducing worker strain, and improving safety. This paper presents an integrated human-robot collaboration (HRC) system that leverages advanced intention recognition for real-time task sharing and interaction. By utilizing state-of-the-art human pose estimation combined with deep learning models, we developed a robust framework for detecting and predicting worker intentions. Specifically, we employed LSTM-based and transformer-based neural networks with convolutional and pooling layers to classify human hand trajectories, achieving higher accuracy compared to previous approaches. Additionally, our system integrates dynamic movement primitives (DMPs) for smooth robot motion transitions, collision prevention, and automatic motion onset/cessation detection. We validated the system in a real-world industrial assembly task, demonstrating its effectiveness in enhancing the fluency, safety, and efficiency of human-robot collaboration. The proposed method shows promise in improving real-time decision-making in collaborative environments, offering a safer and more intuitive interaction between humans and robots.
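The convolution-and-pooling classification idea described above can be sketched in miniature. The block below is illustrative only, not the authors' trained model: the layer sizes, the three-goal class set, and the untrained random weights are all assumptions. It only demonstrates the data flow from a (time, xyz) hand trajectory to goal-class probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w, b):
    """Valid 1-D convolution over time. x: (T, C_in), w: (K, C_in, C_out)."""
    k, _, c_out = w.shape
    t_out = x.shape[0] - k + 1
    y = np.empty((t_out, c_out))
    for t in range(t_out):
        y[t] = np.tensordot(x[t:t + k], w, axes=([0, 1], [0, 1])) + b
    return np.maximum(y, 0.0)  # ReLU

def classify(traj, n_classes=3):
    """Map a (T, 3) hand trajectory to class probabilities (random weights)."""
    w = rng.normal(0, 0.1, (5, traj.shape[1], 16))  # assumed kernel size/width
    b = np.zeros(16)
    h = conv1d(traj, w, b)
    h = h.max(axis=0)                 # global max pooling over time
    w_out = rng.normal(0, 0.1, (16, n_classes))
    logits = h @ w_out
    p = np.exp(logits - logits.max()) # stable softmax
    return p / p.sum()

probs = classify(rng.normal(size=(60, 3)))  # 60 samples of (x, y, z)
```

A trained version would learn the kernel and output weights from labelled trajectories (with an LSTM or transformer in place of the random projections); here the shapes are the point.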
Article: "Human intention recognition by deep LSTM and transformer networks for real-time human-robot collaboration." Frontiers in Robotics and AI, vol. 12, article 1708987. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12757248/pdf/
Pub Date: 2025-12-19. eCollection Date: 2025-01-01. DOI: 10.3389/frobt.2025.1682437
Edgar Welte, Rania Rayyes
Dexterous manipulation is a crucial yet highly complex challenge in humanoid robotics, demanding precise, adaptable, and sample-efficient learning methods. As humanoid robots are usually designed to operate in human-centric environments and interact with everyday objects, mastering dexterous manipulation is critical for real-world deployment. Traditional approaches, such as reinforcement learning and imitation learning, have made significant strides, but they often struggle due to the unique challenges of real-world dexterous manipulation, including high-dimensional control, limited training data, and covariate shift. This survey provides a comprehensive overview of these challenges and reviews existing learning-based methods for real-world dexterous manipulation, spanning imitation learning, reinforcement learning, and hybrid approaches. A promising yet underexplored direction is interactive imitation learning, where human feedback actively refines a robot's behavior during training. While interactive imitation learning has shown success in various robotic tasks, its application to dexterous manipulation remains limited. To address this gap, we examine current interactive imitation learning techniques applied to other robotic tasks and discuss how these methods can be adapted to enhance dexterous manipulation. By synthesizing state-of-the-art research, this paper highlights key challenges, identifies gaps in current methodologies, and outlines potential directions for leveraging interactive imitation learning to improve dexterous robotic skills.
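The interactive imitation learning idea highlighted above can be sketched as a DAgger-style loop on a toy task: the learner's current policy drives each rollout, a stand-in "expert" (here a simple proportional rule, not a real human) relabels every visited state with a corrective action, and the policy is refit on all data collected so far. The 1-D reaching task, linear policy class, and expert gain are illustrative assumptions, far simpler than any dexterous-manipulation setting.

```python
import numpy as np

GOAL = 5.0

def expert(state):
    # Stand-in for human corrective feedback: proportional reach action.
    return 0.5 * (GOAL - state)

class LinearPolicy:
    """Learner: action = w * state + b, refit by least squares."""
    def __init__(self):
        self.w, self.b = 0.0, 0.0
    def act(self, state):
        return self.w * state + self.b
    def fit(self, states, actions):
        X = np.column_stack([states, np.ones(len(states))])
        sol = np.linalg.lstsq(X, np.asarray(actions), rcond=None)[0]
        self.w, self.b = sol

policy, states, actions = LinearPolicy(), [], []
for _ in range(5):                    # DAgger iterations
    s = 0.0
    for _ in range(20):               # learner's policy drives the rollout
        states.append(s)
        actions.append(expert(s))     # expert relabels the visited state
        s += 0.2 * policy.act(s)      # simple integrator dynamics
    policy.fit(states, actions)       # refit on the aggregated dataset

final_error = abs(GOAL - s)
```

The key DAgger property is visible even here: training states come from the learner's own visitation distribution, which is what mitigates covariate shift relative to plain behavioural cloning.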
Article: "Interactive imitation learning for dexterous robotic manipulation: challenges and perspectives-a survey." Frontiers in Robotics and AI, vol. 12, article 1682437. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12757213/pdf/
Pub Date: 2025-12-18. eCollection Date: 2025-01-01. DOI: 10.3389/frobt.2025.1625968
Shoaib Mohd Nasti, Zahoor Ahmad Najar, Mohammad Ahsan Chishti
Navigating in unknown environments without prior maps poses a significant challenge for mobile robots due to sparse rewards, dynamic obstacles, and limited prior knowledge. This paper presents an Improved Deep Reinforcement Learning (DRL) framework based on the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm for adaptive mapless navigation. In addition to architectural enhancements, the proposed method incorporates a latent-state encoder and predictor module that transforms high-dimensional sensor inputs into compact embeddings. This compact representation reduces the effective dimensionality of the state space, enabling smoother value-function approximation and mitigating overestimation errors common in actor-critic methods. It uses intrinsic rewards derived from prediction error in the latent space to promote exploration of novel states. The intrinsic reward encourages the agent to prioritize uncertain yet informative regions, improving exploration efficiency under sparse extrinsic reward signals and accelerating convergence. Furthermore, training stability is achieved through regularization of the latent space via maximum mean discrepancy (MMD) loss. By enforcing consistent latent dynamics, the MMD constraint reduces variance in target value estimation and results in more stable policy updates. Experimental results in simulated ROS2/Gazebo environments demonstrate that the proposed framework outperforms standard TD3 and other improved TD3 variants. Our model achieves a 93.1% success rate and a low 6.8% collision rate, reflecting efficient and safe goal-directed navigation.
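The two auxiliary signals described above can be sketched roughly, with untrained random linear maps standing in for the paper's networks; the dimensions, Gaussian kernel, and 0.1 bonus scale are assumptions for illustration. The intrinsic reward is the predictor's error in latent space, and an MMD term compares latent batches for regularisation.

```python
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, LATENT_DIM = 180, 16   # e.g. 180 range readings -> 16-D latent (assumed)

W_enc = rng.normal(0, 1 / np.sqrt(OBS_DIM), (OBS_DIM, LATENT_DIM))
W_pred = rng.normal(0, 1 / np.sqrt(LATENT_DIM), (LATENT_DIM, LATENT_DIM))

def encode(obs):
    """Compress raw sensor input into a compact latent embedding."""
    return np.tanh(obs @ W_enc)

def intrinsic_reward(obs_t, obs_next, scale=0.1):
    """Bonus proportional to how badly the predictor foresaw the next latent."""
    z_t, z_next = encode(obs_t), encode(obs_next)
    return scale * np.linalg.norm(z_t @ W_pred - z_next)

def mmd(x, y):
    """Biased Gaussian-kernel MMD^2 between two latent batches (regulariser)."""
    def k(a, b):
        d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d / (2.0 * a.shape[1]))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

obs_t = rng.random(OBS_DIM)
obs_next = rng.random(OBS_DIM)
r_total = -0.01 + intrinsic_reward(obs_t, obs_next)  # sparse step cost + bonus
```

In a full TD3 agent the encoder and predictor would be trained jointly, the bonus would be added to the environment reward at each transition, and the MMD term would enter the encoder's loss.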
Article: "Adaptive mapless mobile robot navigation using deep reinforcement learning based improved TD3 algorithm." Frontiers in Robotics and AI, vol. 12, article 1625968. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12756063/pdf/
Pub Date: 2025-12-18. eCollection Date: 2025-01-01. DOI: 10.3389/frobt.2025.1716801
Antonio Fleres, Luisa Damiano
The challenge of sustainability in robotics is usually addressed in terms of materials, energy, and efficiency. Yet the long-term viability of robotic systems also depends on organizational interdependencies that shape how they are maintained, experienced, and integrated into human environments. The present article develops this systemic perspective by advancing the hypothesis that such interdependencies can be understood as self-organizing dynamics. To examine this hypothesis, we analyze the case of Sony's AIBO robotic dogs. Originally designed for social companionship, AIBO units gave rise to a hybrid socio-technical ecosystem in which owners, repair specialists, and ritual practices sustained the robots long after their commercial discontinuation. Building on self-organization theory, we introduce the concept of the "robosphere" as an evolving network of relations in which robotic and human agents co-constitute resilient, sustainability-oriented ecosystems. Extending self-organization beyond its classical biological and technical domains, we argue that robotic sustainability must be framed not as a narrow technical issue but as a complex, multifactorial, and distributed process grounded in organizational interdependencies that integrate technical, cognitive, social, and affective dimensions of human life. Our contribution is twofold. First, we propose a modeling perspective that interprets sustainability in robotics as an emergent property of these interdependencies, exemplified by repair, reuse, and ritual practices that prolonged AIBO's lifecycle. Second, we outline a set of systemic design principles to inform the development of future human-robot ecosystems. By situating the AIBO case within the robospheric framework, this Hypothesis and Theory article advances the view that hybrid socio-technical collectives can generate sustainability from within. 
It outlines a programmatic horizon for rethinking social robotics not as disposable products, but as integral nodes of co-evolving, sustainable human-robot ecologies.
Article: "From AIBO to robosphere. Organizational interdependencies in sustainable robotics." Frontiers in Robotics and AI, vol. 12, article 1716801. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12756144/pdf/
Pub Date: 2025-12-18. eCollection Date: 2025-01-01. DOI: 10.3389/frobt.2025.1698591
Miranda Cravetz, Purva Vyas, Cindy Grimm, Joseph R Davidson
When a passively compliant hand grasps an object, slip events are often accompanied by flexion or extension of the finger or finger joints. This paper investigates whether a combination of orientation change and slip-induced vibration at the fingertip, as sensed by an inertial measurement unit (IMU), can be used as a slip indicator. Using a tendon-driven hand, which achieves passive compliance through underactuation, we performed 195 manipulation trials involving both slip and non-slip conditions. We then labeled this data automatically using motion-tracking data, and trained a convolutional neural network (CNN) to detect the slip events. Our results show that slip can be successfully detected from IMU data, even in the presence of other disturbances. This remains the case when deploying the trained network on data from a different gripper performing a new manipulation task on a previously unseen object.
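The automatic labelling step can be illustrated as follows; the window length, 2 mm slip threshold, and synthetic signals are assumptions, not the paper's values. Each fixed-length IMU window is standardized, and a window is marked as slip when the motion-capture drift between object and fingertip over that window exceeds the threshold.

```python
import numpy as np

WIN = 50        # samples per window (e.g. 0.25 s at 200 Hz; assumed)
SLIP_MM = 2.0   # relative displacement that counts as slip (assumed)

def make_windows(imu, rel_disp):
    """imu: (T, 6) accel+gyro; rel_disp: (T,) object-fingertip drift in mm.
    Returns standardized windows and binary slip labels."""
    xs, ys = [], []
    for start in range(0, len(imu) - WIN + 1, WIN):
        w = imu[start:start + WIN]
        w = (w - w.mean(axis=0)) / (w.std(axis=0) + 1e-8)  # per-axis z-score
        xs.append(w)
        drift = rel_disp[start + WIN - 1] - rel_disp[start]
        ys.append(int(abs(drift) > SLIP_MM))
    return np.stack(xs), np.array(ys)

# Synthetic trial: no slip for the first half, steady slip in the second.
rng = np.random.default_rng(0)
imu = rng.normal(size=(500, 6))
rel_disp = np.concatenate([np.zeros(250), np.linspace(0, 20, 250)])
X, y = make_windows(imu, rel_disp)
```

The resulting (X, y) pairs are what a 1-D CNN over the six IMU channels would be trained on.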
Article: "Slip detection for compliant robotic hands using inertial signals and deep learning." Frontiers in Robotics and AI, vol. 12, article 1698591. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12756126/pdf/
Pub Date: 2025-12-16. eCollection Date: 2025-01-01. DOI: 10.3389/frobt.2025.1728647
Ayaka Fujii, Carlos Toshinori Ishi, Kurima Sakai, Tomo Funayama, Ritsuko Iwai, Yusuke Takahashi, Takatsune Kumada, Takashi Minato
Robots that interact with humans are required to express emotions in ways that are appropriate to the context. While most prior research has focused primarily on basic emotions, real-life interactions demand more nuanced expressions. In this study, we extended the expressive capabilities of the android robot Nikola by implementing 63 facial expressions, covering not only complex emotions and physical conditions, but also differences in intensity. At Expo 2025 in Japan, more than 600 participants interacted with Nikola by describing situations in which they wanted the robot to perform facial expressions. The robot inferred emotions using a large language model and performed corresponding facial expressions. Questionnaire responses revealed that participants rated the robot's behavior as more appropriate and emotionally expressive when their instructions were abstract, compared to when they explicitly included emotions or physical states. This suggests that abstract instructions enhance perceived agency in the robot. We also investigated and discussed how impressions towards the robot varied depending on the expressions it performed and the personality traits of participants. This study contributes to the research field of human-robot interaction by demonstrating how adaptive facial expressions, in association with instruction styles, are linked to shaping human perceptions of social robots.
Article: "Evaluating human perceptions of android robot facial expressions based on variations in instruction styles." Frontiers in Robotics and AI, vol. 12, article 1728647. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12747908/pdf/
Pub Date: 2025-12-16. eCollection Date: 2025-01-01. DOI: 10.3389/frobt.2025.1711675
Lina Moe, Benjamin Greenberg
This paper examines the evolving landscape of mobile robotics, focusing on challenges faced by roboticists working in industry when integrating robots into human-populated environments. Through interviews with sixteen industry professionals specializing in social mobile robotics, we examined two primary research questions: (1) What approaches to person detection and representation are used in industry? and (2) How does the relationship between industry and academia impact the research process? Our findings reveal diverse approaches to human detection, ranging from basic obstacle avoidance to advanced systems that differentiate among classes of humans. We suggest that robotic system design overall and human detection in particular are influenced by whether researchers use a framework of safety or sociality, how they approach building complex systems, and how they develop metrics for success. Additionally, we highlight the gaps and synergies between industry and academic research, particularly regarding commercial readiness and the incorporation of human-robot interaction (HRI) principles into robotic development. This study underscores the importance of addressing the complexities of social navigation in real-world settings and suggests that strengthening avenues of communication between industry and academia will help to shape a sustainable role for robots in the physical and social world.
Article: "From complexity to commercial readiness: industry insights on bridging gaps in human-robot interaction and social robot navigation." Frontiers in Robotics and AI, vol. 12, article 1711675. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12747840/pdf/
Pub Date: 2025-12-16. eCollection Date: 2025-01-01. DOI: 10.3389/frobt.2025.1694952
Veera Venkata Ram Murali Krishna Rao Muvva, Kunjan Theodore Joseph, Yogesh Chawla, Santosh Pitla, Marilyn Wolf
Introduction: This study introduces a custom-built uncrewed aerial vehicle (UAV) designed for precision agriculture, emphasizing modularity, adaptability, and affordability. Unlike commercial UAVs restricted by proprietary systems, this platform offers full customization and advanced autonomy capabilities.
Methods: The UAV integrates a Cube Blue flight controller for low-level control with a Raspberry Pi 4 companion computer that runs a Model Predictive Control (MPC) algorithm for high-level trajectory optimization. Instead of conventional PID controllers, this work adopts an optimal control strategy using MPC. The system also incorporates Kalman filtering to enable adaptive mission planning and real-time coordination with a moving uncrewed ground vehicle (UGV). Testing was performed in both simulation and outdoor field environments, covering static and dynamic waypoint tracking as well as complex trajectories.
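A minimal receding-horizon MPC sketch in the spirit of the method above, using a 1-D double integrator as a stand-in for the quadrotor dynamics; the horizon, weights, and time step are illustrative, and the real system adds actuator constraints and Kalman-filtered state estimates. With a quadratic cost and no constraints, each planning step reduces to a linear solve.

```python
import numpy as np

DT, H = 0.1, 10  # control period and prediction horizon (assumed values)

def mpc_step(pos, vel, ref):
    """ref: (H,) desired positions over the horizon.
    Minimizes tracking error + small effort penalty; returns first accel."""
    # Double-integrator response: position after k steps depends on inputs
    # u_0..u_{k-1} via A[k-1, j] = (k - j - 0.5) * DT^2 (zero-order hold).
    A = np.zeros((H, H))
    for k in range(1, H + 1):
        for j in range(k):
            A[k - 1, j] = (k - j - 0.5) * DT ** 2
    free = pos + DT * vel * np.arange(1, H + 1)  # zero-input prediction
    reg = 0.1 * np.eye(H)                        # effort (ridge) penalty
    u = np.linalg.solve(A.T @ A + reg, A.T @ (ref - free))
    return u[0]                                  # receding horizon: apply first

# Closed loop: hold a 1 m setpoint starting from rest.
pos, vel = 0.0, 0.0
setpoint = np.ones(H)
for _ in range(60):
    a = mpc_step(pos, vel, setpoint)
    pos += DT * vel + 0.5 * DT ** 2 * a
    vel += DT * a
```

Only the first planned acceleration is applied before replanning, which is what lets the controller absorb disturbances such as wind between steps.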
Results: The UAV performed figure-eight, curved, and wind-disturbed trajectories with root mean square error values consistently between 8 and 20 cm during autonomous operations, with slightly higher errors in more complex trajectories. The system successfully followed a moving UGV along nonlinear, curved paths.
Discussion: These results demonstrate that the proposed UAV platform is capable of precise autonomous navigation and real-time coordination, confirming its suitability for real-world agricultural applications and offering a flexible alternative to commercial UAV systems.
Article: "Custom UAV with model predictive control for autonomous static and dynamic trajectory tracking in agricultural fields." Frontiers in Robotics and AI, vol. 12, article 1694952. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12747843/pdf/
Pub Date: 2025-12-12 | eCollection Date: 2025-01-01 | DOI: 10.3389/frobt.2025.1659784
Sofia Thunberg, Erik Lagerstedt, Anna Jönsson, Anna Lena Sundell
Introduction: As robotic technologies become increasingly integrated into care settings, it is critical to assess their impact within the complexity of real-world contexts. This exploratory study examines the introduction of a robot cat for children with Autism Spectrum Disorder (ASD) in a specialist dental care unit. Children with ASD often face challenges in dental care, including anxiety, sensory sensitivities, and difficulty with collaboration. The study investigates whether a robot cat can provide psychosocial support to these patients.
Methods: Ten patients, aged 5-10, participated in the 12-month study, each undergoing one baseline session without the robot and 3-5 subsequent visits with the robot, yielding 37 sessions of video data.
Results: Reflexive thematic analysis revealed three key themes: the robot cat can enhance training and treatment, robot cats can serve as a beneficial but non-essential tool, and robot cats can sometimes hinder progress in training and treatment. These findings highlight significant individual variation in how the robot was experienced, shaped by context, timing, and emotional state. The robot's role was not universally positive or passive; its effectiveness depended on how it was integrated into personalised care strategies by the dental hygienist, guardians, and the patients themselves.
Discussion: This study underscores the importance of tailoring technological interventions in care, advocating for cautious, context-sensitive use rather than one-size-fits-all solutions. Future work should further explore adaptive, individualised deployment.
{"title":"Exploring companion robots for children with autism spectrum disorder: a reflexive thematic analysis in specialist dental care.","authors":"Sofia Thunberg, Erik Lagerstedt, Anna Jönsson, Anna Lena Sundell","doi":"10.3389/frobt.2025.1659784","DOIUrl":"10.3389/frobt.2025.1659784","url":null,"abstract":"<p><strong>Introduction: </strong>As robotic technologies become increasingly integrated into care settings, it is critical to assess their impact within the complexity of real-world contexts. This exploratory study examines the introduction of a robot cat for children with Autism Spectrum Disorder (ASD) in a specialist dental care unit. Children with ASD often face challenges in dental care, including anxiety, sensory sensitivities, and difficulty with collaboration. The study investigates whether a robot cat can provide psychosocial support to these patients.</p><p><strong>Methods: </strong>Ten patients, aged 5-10, participated in the 12-month study, each undergoing one baseline session without the robot and 3-5 subsequent visits with the robot, yielding 37 sessions of video data.</p><p><strong>Results: </strong>Reflexive thematic analysis revealed three key themes: the robot cat can <i>enhance training and treatment</i>, robot cats can serve as a <i>beneficial but non-essential tool</i>, and robot cats can sometimes <i>hinder progress in training and treatment</i>. These findings highlight significant individual variation in how the robot was experienced, shaped by context, timing, and emotional state. The robot's role was not universally positive or passive; its effectiveness depended on how it was integrated into personalised care strategies by the dental hygienist, guardians, and the patients themselves.</p><p><strong>Discussion: </strong>This study underscores the importance of tailoring technological interventions in care, advocating for cautious, context-sensitive use rather than one-size-fits-all solutions. 
Future work should further explore adaptive, individualised deployment.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1659784"},"PeriodicalIF":3.0,"publicationDate":"2025-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12740894/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145851141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-11 | eCollection Date: 2025-01-01 | DOI: 10.3389/frobt.2025.1750134
Nikolos Gurney, Dana Hughes, David V Pynadath, Ning Wang
{"title":"Editorial: Theory of mind in robots and intelligent systems.","authors":"Nikolos Gurney, Dana Hughes, David V Pynadath, Ning Wang","doi":"10.3389/frobt.2025.1750134","DOIUrl":"10.3389/frobt.2025.1750134","url":null,"abstract":"","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1750134"},"PeriodicalIF":3.0,"publicationDate":"2025-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12738163/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145851205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}