Pub Date: 2026-01-13 | eCollection Date: 2026-01-01 | DOI: 10.3389/frobt.2026.1714310
Pietro Morasso
Although cognitive robotics is still a work in progress, the trend is to "free" robots from the assembly lines of the third industrial revolution and allow them to "enter human society" in large numbers and many forms, as forecast by Industry 4.0 and beyond. Cognitive robots are expected to be intelligent, designed to learn from experience and adapt to real-world situations rather than being preprogrammed with specific actions for all possible stimuli and environmental conditions. Moreover, such robots are supposed to interact closely with human partners, cooperating with them, which implies that robot cognition must incorporate, in a deep sense, ethical principles and develop, in conflict situations, decision-making capabilities that can be perceived as wise. Intelligence (true vs. false), ethics (right vs. wrong), and wisdom (good vs. bad) are interrelated but independent features of human behavior, and a similar framework should also characterize the behavior of cognitive agents integrated into human society. The working hypothesis formulated in this paper is that the propensity to consolidate ethically guided behavior, possibly evolving into some kind of wisdom, is best supported by a cognitive architecture based on bio-inspired embodied cognition, educated through development and social interaction. In contrast, the problem with current AI foundation models applied to robotics (embodied AI, EAI) is that, although they can be super-intelligent, they are intrinsically disembodied and ethically agnostic, regardless of how much information was absorbed during training. We suggest that the proposed alternative may facilitate social acceptance and thus make such robots civilized.
{"title":"Bio-inspired cognitive robotics vs. embodied AI for socially acceptable, civilized robots.","authors":"Pietro Morasso","doi":"10.3389/frobt.2026.1714310","DOIUrl":"https://doi.org/10.3389/frobt.2026.1714310","url":null,"abstract":"<p><p>Although cognitive robotics is still a work in progress, the trend is to \"free\" robots from the assembly lines of the third industrial revolution and allow them to \"enter human society\" in large numbers and many forms, as forecasted by Industry 4.0 and beyond. Cognitive robots are expected to be intelligent, designed to learn from experience and adapt to real-world situations rather than being preprogrammed with specific actions for all possible stimuli and environmental conditions. Moreover, such robots are supposed to interact closely with human partners, cooperating with them, and this implies that robot cognition must incorporate, in a deep sense, ethical principles and evolve, in conflict situations, decision-making capabilities that can be perceived as wise. Intelligence (true vs. false), ethics (right vs. wrong), and wisdom (good vs. bad) are interrelated but independent features of human behavior, and a similar framework should also characterize the behavior of cognitive agents integrated in human society. The working hypothesis formulated in this paper is that the propensity to consolidate ethically guided behavior, possibly evolving to some kind of wisdom, is a cognitive architecture based on bio-inspired embodied cognition, educated through development and social interaction. In contrast, the problem with current AI foundation models applied to robotics (EAI) is that, although they can be super-intelligent, they are intrinsically disembodied and ethically agnostic, independent of how much information was absorbed during training. We suggest that the proposed alternative may facilitate social acceptance and thus make such robots <i>civilized</i>.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"13 ","pages":"1714310"},"PeriodicalIF":3.0,"publicationDate":"2026-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12834747/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146094476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-12 | eCollection Date: 2025-01-01 | DOI: 10.3389/frobt.2025.1719342
Laura Aymerich-Franch, Tarek Taha, Takahiro Miyashita, Hiroko Kamide, Hiroshi Ishiguro, Paolo Dario
Cybernetic avatars are hybrid interaction robots or digital representations that combine autonomous capabilities with teleoperated control. This study investigates the acceptance of cybernetic avatars, with particular emphasis on robot avatars for customer service. Specifically, we explore how acceptance varies as a function of modality (physical vs. virtual), robot appearance (e.g., android, robotic-looking, cartoonish), deployment settings (e.g., shopping malls, hotels, hospitals), and functional tasks (e.g., providing information, patrolling). To this end, we conducted a large-scale survey with over 1,000 participants in Dubai. As one of the most multicultural societies worldwide, Dubai offers a rare opportunity to capture opinions from multiple cultural clusters within a single setting simultaneously, thereby overcoming the limitations of nationally bound samples and providing a more global picture of acceptance. Overall, cybernetic avatars received a high level of acceptance, with physical robot avatars receiving higher acceptance than digital avatars. In terms of appearance, robot avatars with a highly anthropomorphic robotic appearance were the most accepted, followed by cartoonish designs and androids. Animal-like appearances received the lowest level of acceptance. Among the tasks, providing information and guidance was rated as the most valued. Shopping malls, airports, public transport stations, and museums were the settings with the highest acceptance, whereas healthcare-related spaces received lower levels of support. An analysis by community cluster revealed, among other findings, that Emirati respondents were particularly accepting of android appearances, whereas participants from the 'Other Asia' cluster were particularly accepting of cartoonish appearances. Our study underscores the importance of incorporating citizen feedback from the early stages of design and deployment to enhance societal acceptance of cybernetic avatars.
{"title":"Public acceptance of cybernetic avatars in the service sector: evidence from a large-scale survey.","authors":"Laura Aymerich-Franch, Tarek Taha, Takahiro Miyashita, Hiroko Kamide, Hiroshi Ishiguro, Paolo Dario","doi":"10.3389/frobt.2025.1719342","DOIUrl":"10.3389/frobt.2025.1719342","url":null,"abstract":"<p><p>Cybernetic avatars are hybrid interaction robots or digital representations that combine autonomous capabilities with teleoperated control. This study investigates the acceptance of cybernetic avatars, with particular emphasis on robot avatars for customer service. Specifically, we explore how acceptance varies as a function of modality (physical vs. virtual), robot appearance (e.g., android, robotic-looking, cartoonish), deployment settings (e.g., shopping malls, hotels, hospitals), and functional tasks (e.g., providing information, patrolling). To this end, we conducted a large-scale survey with over 1,000 participants in Dubai. As one of the most multicultural societies worldwide, Dubai offers a rare opportunity to capture opinions from multiple cultural clusters within a single setting simultaneously, thereby overcoming the limitations of nationally bound samples and providing a more global picture of acceptance. Overall, cybernetic avatars received a high level of acceptance, with physical robot avatars receiving higher acceptance than digital avatars. In terms of appearance, robot avatars with a highly anthropomorphic robotic appearance were the most accepted, followed by cartoonish designs and androids. Animal-like appearances received the lowest level of acceptance. Among the tasks, providing information and guidance was rated as the most valued. Shopping malls, airports, public transport stations, and museums were the settings with the highest acceptance, whereas healthcare-related spaces received lower levels of support. An analysis by community cluster revealed, among other findings, that Emirati respondents were particularly accepting of android appearances, whereas participants from the 'Other Asia' cluster were particularly accepting of cartoonish appearances. Our study underscores the importance of incorporating citizen feedback from the early stages of design and deployment to enhance societal acceptance of cybernetic avatars.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1719342"},"PeriodicalIF":3.0,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12832308/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146067573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-12 | eCollection Date: 2025-01-01 | DOI: 10.3389/frobt.2025.1682200
Chengyandan Shen, Christoffer Sloth
This paper proposes an exploration-efficient deep reinforcement learning with reference (DRLR) policy framework for learning robotics tasks that incorporates demonstrations. The DRLR framework builds on the imitation bootstrapped reinforcement learning (IBRL) algorithm; here, we propose to improve IBRL by modifying its action selection module. The proposed action selection module provides a calibrated Q-value, which mitigates the bootstrapping error that otherwise leads to inefficient exploration. Furthermore, to prevent the reinforcement learning (RL) policy from converging to a sub-optimal policy, soft actor-critic (SAC) is used as the RL policy instead of twin delayed DDPG (TD3). The effectiveness of our method in mitigating the bootstrapping error and preventing overfitting is empirically validated on two robotics tasks, bucket loading and drawer opening, which require extensive interaction with the environment. Simulation results also demonstrate the robustness of the DRLR framework across tasks with both low and high state-action dimensions and varying demonstration quality. To evaluate the developed framework on a real-world industrial robotics task, the bucket loading task is deployed on a real wheel loader. The sim-to-real results validate the successful deployment of the DRLR framework.
{"title":"Solving robotics tasks with prior demonstration via exploration-efficient deep reinforcement learning.","authors":"Chengyandan Shen, Christoffer Sloth","doi":"10.3389/frobt.2025.1682200","DOIUrl":"https://doi.org/10.3389/frobt.2025.1682200","url":null,"abstract":"<p><p>This paper proposes an exploration-efficient deep reinforcement learning with reference (DRLR) policy framework for learning robotics tasks incorporating demonstrations. The DRLR framework is developed based on an imitation bootstrapped reinforcement learning (IBRL) algorithm. Here, we propose to improve IBRL by modifying the action selection module. The proposed action selection module provides a calibrated Q-value, which mitigates the bootstrapping error that otherwise leads to inefficient exploration. Furthermore, to prevent the reinforcement learning (RL) policy from converging to a sub-optimal policy, soft actor-critic (SAC) is used as the RL policy instead of twin delayed DDPG (TD3). The effectiveness of our method in mitigating the bootstrapping error and preventing overfitting is empirically validated by learning two robotics tasks: bucket loading and open drawer, which require extensive interactions with the environment. Simulation results also demonstrate the robustness of the DRLR framework across tasks with both low and high state-action dimensions and varying demonstration qualities. To evaluate the developed framework on a real-world industrial robotics task, the bucket loading task is deployed on a real wheel loader. The sim-to-real results validate the successful deployment of the DRLR framework.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1682200"},"PeriodicalIF":3.0,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12832430/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146067621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-12 | eCollection Date: 2025-01-01 | DOI: 10.3389/frobt.2025.1695319
Ulrik Pagh Schultz Lundquist, Saadia Afridi, Clément Berthelot, Nguyen Ngoc Dat, Kasper Hlebowicz, Elena Iannino, Lucie Laporte-Devylder, Guy Maalouf, Giacomo May, Kilian Meier, Constanza A Molina Catricheo, Edouard G A Rolland, Camille Rondeau Saint-Jean, Vandita Shukla, Tilo Burghardt, Anders Lyhne Christensen, Blair R Costelloe, Matthijs Damen, Andrea Flack, Kjeld Jensen, Henrik Skov Midtiby, Majid Mirmehdi, Fabio Remondino, Tom Richardson, Benjamin Risse, Devis Tuia, Magnus Wahlberg, Dylan Cawthorne, Steve Bullock, William Njoroge, Samuel Mutisya, Matt Watson, Elzbieta Pastucha
The rapid loss of biodiversity worldwide is unprecedented, with more species facing extinction now than at any other time in human history. Key factors contributing to this decline include habitat destruction, overexploitation, and climate change. There is an urgent need for innovative and effective conservation practices that leverage advanced technologies, such as autonomous drones, to monitor wildlife, manage human-wildlife conflicts, and protect endangered species. While drones have shown promise in conservation efforts, significant technological challenges remain, particularly in developing reliable, cost-effective solutions capable of operating in remote, unstructured, and open-ended environments. This paper explores the technological advancements necessary for deploying autonomous drones in nature conservation and presents the interdisciplinary scientific methodology of the WildDrone doctoral network as a basis for integrating research in drones, computer vision, and machine learning for ecological monitoring. We report preliminary results demonstrating the potential of these technologies to enhance biodiversity conservation efforts. Based on our preliminary findings, we expect that drones and computer vision will further automate time-consuming observational tasks in nature conservation, allowing human workers to ground conservation actions in evidence derived from large, frequently collected datasets.
{"title":"WildDrone: autonomous drone technology for monitoring wildlife populations.","authors":"Ulrik Pagh Schultz Lundquist, Saadia Afridi, Clément Berthelot, Nguyen Ngoc Dat, Kasper Hlebowicz, Elena Iannino, Lucie Laporte-Devylder, Guy Maalouf, Giacomo May, Kilian Meier, Constanza A Molina Catricheo, Edouard G A Rolland, Camille Rondeau Saint-Jean, Vandita Shukla, Tilo Burghardt, Anders Lyhne Christensen, Blair R Costelloe, Matthijs Damen, Andrea Flack, Kjeld Jensen, Henrik Skov Midtiby, Majid Mirmehdi, Fabio Remondino, Tom Richardson, Benjamin Risse, Devis Tuia, Magnus Wahlberg, Dylan Cawthorne, Steve Bullock, William Njoroge, Samuel Mutisya, Matt Watson, Elzbieta Pastucha","doi":"10.3389/frobt.2025.1695319","DOIUrl":"10.3389/frobt.2025.1695319","url":null,"abstract":"<p><p>The rapid loss of biodiversity worldwide is unprecedented, with more species facing extinction now than at any other time in human history. Key factors contributing to this decline include habitat destruction, overexploitation, and climate change. There is an urgent need for innovative and effective conservation practices that leverage advanced technologies, such as autonomous drones, to monitor wildlife, manage human-wildlife conflicts, and protect endangered species. While drones have shown promise in conservation efforts, significant technological challenges remain, particularly in developing reliable, cost-effective solutions capable of operating in remote, unstructured, and open-ended environments. This paper explores the technological advancements necessary for deploying autonomous drones in nature conservation and presents the interdisciplinary scientific methodology of the WildDrone doctoral network as a basis for integrating research in drones, computer vision, and machine learning for ecological monitoring. We report preliminary results demonstrating the potential of these technologies to enhance biodiversity conservation efforts. Based on our preliminary findings, we expect that drones and computer vision will develop to further automate time consuming observational tasks in nature conservation, thus allowing human workers to ground conservation actions on evidence based on large and frequent data.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1695319"},"PeriodicalIF":3.0,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12865209/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146120724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-12 | eCollection Date: 2025-01-01 | DOI: 10.3389/frobt.2025.1725423
Adriana Hanulíková, Nils Frederik Tolksdorf, Sarah Kapp
Spoken language is one of the most powerful tools for humans to learn, exchange information, and build social relationships. An inherent feature of spoken language is large within- and between-speaker variation across linguistic levels, from sound acoustics to prosodic, lexical, syntactic, and pragmatic choices that differ from written language. Despite advancements in text-to-speech and language models used in social robots, synthetic speech lacks human-like variability. This limitation is especially critical in interactions with children, whose developmental needs require adaptive speech input and ethically responsible design. In child-robot interaction research, robot speech design has received less attention than appearance or multimodal features. We argue that speech variability in robots needs closer examination, considering both how humans adapt to robot speech and how robots could adjust to human speech. We discuss three tensions: (1) feasibility, because dynamic human speech variability is technically challenging to model; (2) desirability, because variability may both enhance and hinder learning, usability, and trust; and (3) ethics, because digital human-like speech risks deception, while robot speech varieties may support transparency. We suggest approaching variability as a design tool while being transparent about the robot's role and capabilities. The key question is which types of variation benefit children's socio-cognitive and language learning, at which developmental stage, and in which context, depending on the robot's role and persona. Integrating insights across disciplines, we outline directions for studying how specific dimensions of variability affect comprehension, engagement, and language learning, and for developing vocal interactivity that is engaging, ethically transparent, and developmentally appropriate.
{"title":"Robot speech: how variability matters for child-robot interactions.","authors":"Adriana Hanulíková, Nils Frederik Tolksdorf, Sarah Kapp","doi":"10.3389/frobt.2025.1725423","DOIUrl":"10.3389/frobt.2025.1725423","url":null,"abstract":"<p><p>Spoken language is one of the most powerful tools for humans to learn, exchange information, and build social relationships. An inherent feature of spoken language is large within- and between-speaker variation across linguistic levels, from sound acoustics to prosodic, lexical, syntactic, and pragmatic choices that differ from written language. Despite advancements in text-to-speech and language models used in social robots, synthetic speech lacks human-like variability. This limitation is especially critical in interactions with children, whose developmental needs require adaptive speech input and ethically responsible design. In child-robot interaction research, robot speech design has received less attention than appearance or multimodal features. We argue that speech variability in robots needs closer examination, considering both how humans adapt to robot speech and how robots could adjust to human speech. We discuss three tensions: (1) feasibility, because dynamic human speech variability is technically challenging to model; (2) desirability, because variability may both enhance and hinder learning, usability, and trust; and (3) ethics, because digital human-like speech risks deception, while robot speech varieties may support transparency. We suggest approaching variability as a design tool while being transparent about the robot's role and capabilities. The key question is which types of variation benefit children's socio-cognitive and language learning, at which developmental stage, in which context, depending on the robot's role and persona. Integrating insights across disciplines, we outline directions for studying how specific dimensions of variability affect comprehension, engagement, language learning, and for developing vocal interactivity that is engaging, ethically transparent, and developmentally appropriate.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1725423"},"PeriodicalIF":3.0,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12832417/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146067567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-09 | eCollection Date: 2025-01-01 | DOI: 10.3389/frobt.2025.1766766
Sheri Markose, Tony Prescott, Georg Northoff, Emily Cross, Karl Friston
{"title":"Editorial: Narrow and general intelligence: embodied, self-referential social cognition and novelty production in humans, AI and robots.","authors":"Sheri Markose, Tony Prescott, Georg Northoff, Emily Cross, Karl Friston","doi":"10.3389/frobt.2025.1766766","DOIUrl":"10.3389/frobt.2025.1766766","url":null,"abstract":"","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1766766"},"PeriodicalIF":3.0,"publicationDate":"2026-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12827700/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146054604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-08 | eCollection Date: 2025-01-01 | DOI: 10.3389/frobt.2025.1737028
Paulina Tsvetkova
In the era of artificial intelligence and rapidly advancing robotics, the field of Human-Robot Interaction (HRI) has taken center stage across multiple domains, including psychology. From a psychological perspective, it is therefore essential to deepen our understanding of the factors that shape the quality of these interactions and their implications. This emphasis also aligns with the principles of Industry 5.0, which prioritize human well-being and use technologies to promote sustainable progress. The present study employs an exploratory mixed-method approach and aims to examine perceptions of warmth, competence and discomfort with the Furhat social robot in a psychological assessment setting. Specifically, we investigated young adults' interactions with the Furhat social robot while it administered the Depression, Anxiety and Stress Scale (DASS-21). Following the interaction, the participants completed the short version of the Robot Social Attributes Scale (RoSAS-SF) to assess perceived warmth, competence and discomfort, and provided qualitative feedback regarding their interactional experiences and acceptance of the robot. The findings provide preliminary insights into the respondents' perceptions of and openness toward robot-administered psychological screening, suggesting that the Furhat social robot may have potential as an assistive tool in mental health assessment contexts. These results highlight the need for further research with larger samples to examine the role of social robots in psychological practice more comprehensively.
{"title":"Perceptions of the Furhat social robot administering a mental health assessment: a pilot mixed-method exploration.","authors":"Paulina Tsvetkova","doi":"10.3389/frobt.2025.1737028","DOIUrl":"10.3389/frobt.2025.1737028","url":null,"abstract":"<p><p>In the era of artificial intelligence and rapidly advancing robotics, the field of Human-Robot Interaction (HRI) has taken center stage across multiple domains, including psychology. From a psychological perspective, it is therefore essential to deepen our understanding of the factors that shape the quality of these interactions and their implications. This emphasis also aligns with the principles of Industry 5.0, which prioritize human well-being and use technologies to promote sustainable progress. The present study employs an exploratory mixed-method approach and aims to examine perceptions of warmth, competence and discomfort with the Furhat social robot in a psychological assessment setting. Specifically, we investigated young adults' interactions with the Furhat social robot while it administered the Depression, Anxiety and Stress Scale (DASS-21). Following the interaction, the participants completed the short version of the Robot Social Attributes Scale (RoSAS-SF) to assess perceived warmth, competence and discomfort, and provided qualitative feedback regarding their interactional experiences and acceptance of the robot. The findings provide preliminary insights into the respondents' perceptions of and openness toward robot-administered psychological screening, suggesting that the Furhat social robot may have potential as an assistive tool in mental health assessment contexts. These results highlight the need for further research with larger samples to examine the role of social robots in psychological practice more comprehensively.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1737028"},"PeriodicalIF":3.0,"publicationDate":"2026-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12847930/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146087602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-08 | eCollection Date: 2025-01-01 | DOI: 10.3389/frobt.2025.1732294
Marc-Anton Scheidl, Kristin Schuh, Marek Sierotowicz, Marcel Betsch, Claudio Castellini
Objective: This pilot study with ten healthy adults tested whether a lightweight, low-cost knee orthosis equipped with EMG-driven impedance control reduces quadriceps muscle effort during the sit-to-stand (STS) transition.
Methods: Ten able-bodied adults performed 15 paced STS repetitions under three conditions: without orthosis (No-Ortho), orthosis worn unpowered (Ortho-OFF; friction-compensated), and orthosis actively powered (Ortho-ON). Surface electromyography (EMG) was recorded using 8-channel thigh bracelets on both legs. EMG signals from the braced leg were processed using ridge regression and slew-rate limiting to generate a normalized control signal that dynamically scales knee stiffness while maintaining constant damping. Median values and trial-to-trial variance of the average rectified EMG (ARV) were analyzed across four distinct movement phases (SIT, UP, STAND, DOWN) using linear mixed-effects models with log-transformed data and Bonferroni-adjusted planned contrasts.
Results: Powered assistance significantly reduced median bilateral ARV by 11% during the UP phase and 15% during the DOWN phase (p_adj < 0.001), with greater reductions (up to 21%) observed on the braced limb. Variance in muscle activation decreased substantially (by up to 44%) on the braced leg during the DOWN phase, suggesting more repeatable activation patterns and neuromuscular consistency across trials. No significant compensatory activation was observed in the contralateral limb. Additionally, within-session adaptation trends were observed as participants progressively increased preparatory torque during the SIT phase, while UP-phase ARV trended downward.
Conclusion: A lightweight, affordable knee orthosis employing a rapid (≈10 s), minimally calibrated EMG-driven impedance controller effectively reduces quadriceps muscle activation during STS without compromising natural movement coordination. Torque capacity limitations (16 Nm) may limit effectiveness for heavier users, and further research is needed to fully evaluate kinematic fidelity.
{"title":"EMG-controlled knee orthosis lowers effort in sit-to-stand.","authors":"Marc-Anton Scheidl, Kristin Schuh, Marek Sierotowicz, Marcel Betsch, Claudio Castellini","doi":"10.3389/frobt.2025.1732294","DOIUrl":"10.3389/frobt.2025.1732294","url":null,"abstract":"<p><strong>Objective: </strong>Pilot study with ten healthy adults, testing whether a lightweight, low-cost knee orthosis equipped with EMG-driven impedance control reduces quadriceps muscle effort during the sit-to-stand (STS) transition.</p><p><strong>Methods: </strong>Ten able-bodied adults performed 15 paced STS repetitions under three conditions: without orthosis (No-Ortho), orthosis worn unpowered (Ortho-OFF; friction-compensated), and orthosis actively powered (Ortho-ON). Surface electromyography (EMG) was recorded using 8-channel thigh bracelets on both legs. EMG signals from the braced leg were processed using ridge regression and slew-rate limiting to generate a normalized control signal that dynamically scales knee stiffness while maintaining constant damping. Median values and trial-to-trial variance of the average rectified EMG (ARV) were analyzed across four distinct movement phases (SIT, UP, STAND, DOWN) using linear mixed-effects models with log-transformed data and Bonferroni-adjusted planned contrasts.</p><p><strong>Results: </strong>Powered assistance significantly reduced median bilateral ARV by 11% during the UP phase and 15% during the DOWN phase <math><mrow><mo>(</mo> <mrow> <msub><mrow><mi>p</mi></mrow> <mrow><mi>adj</mi></mrow> </msub> <mo><</mo> <mn>0.001</mn></mrow> <mo>)</mo></mrow> </math> , with greater reductions (up to 21%) observed on the braced limb. Variance in muscle activation decreased substantially (by up to 44%) on the braced leg during the DOWN phase, suggesting more repeatable activation patterns and neuromuscular consistency across trials. No significant compensatory activation was observed in the contralateral limb. Additionally, within-session adaptation trends were observed as participants progressively increased preparatory torque during the SIT phase, while UP-phase ARV trended downward.</p><p><strong>Conclusion: </strong>A lightweight, affordable knee orthosis employing a rapid ( <math><mrow><mo>≈</mo></mrow> </math> 10 s), minimally calibrated EMG-driven impedance controller effectively reduces quadriceps muscle activation during STS without compromising natural movement coordination. Torque capacity limitations (16 Nm) may limit effectiveness for heavier users, and further research is needed to evaluate kinematic fidelity fully.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1732294"},"PeriodicalIF":3.0,"publicationDate":"2026-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12823904/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146047083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-08 | eCollection Date: 2025-01-01 | DOI: 10.3389/frobt.2025.1773450
Cedomir Stanojevic, Casey Bennett, Jennifer Piatt, Selma Šabanović
{"title":"Editorial: Digital health applications of social robots.","authors":"Cedomir Stanojevic, Casey Bennett, Jennifer Piatt, Selma Šabanović","doi":"10.3389/frobt.2025.1773450","DOIUrl":"10.3389/frobt.2025.1773450","url":null,"abstract":"","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1773450"},"PeriodicalIF":3.0,"publicationDate":"2026-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12823485/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146054649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-08 | eCollection Date: 2025-01-01 | DOI: 10.3389/frobt.2025.1706910
Beril Yalcinkaya, Micael S Couceiro, Salviano Soares, António Valente
Robotic fleet management systems are increasingly vital for sustainable operations in agriculture, forestry, and other field domains where labor shortages, efficiency, and environmental concerns intersect. We present FORMIGA, a fleet management framework that integrates human operators and autonomous robots into a collaborative ecosystem. FORMIGA combines standardised communication through the Robot Operating System with a user-centered interface for monitoring and intervention, while also leveraging large language models to generate executable task code from natural language prompts. The framework was deployed and validated within the FEROX project, a European initiative addressing sustainable berry harvesting in remote environments. In simulation-based trials, FORMIGA demonstrated adaptive task allocation, reduced operator workload, and faster task completion compared to semi-autonomous control, enabling dynamic labor division between humans and robots. By enhancing productivity, supporting worker safety, and promoting resource-efficient operations, FORMIGA contributes to the economic and environmental dimensions of sustainability, offering a transferable tool for advancing human-robot collaboration in field robotics.
{"title":"FORMIGA: a fleet management framework for sustainable human-robot collaboration in field robotics.","authors":"Beril Yalcinkaya, Micael S Couceiro, Salviano Soares, António Valente","doi":"10.3389/frobt.2025.1706910","DOIUrl":"10.3389/frobt.2025.1706910","url":null,"abstract":"<p><p>Robotic fleet management systems are increasingly vital for sustainable operations in agriculture, forestry, and other field domains where labor shortages, efficiency, and environmental concerns intersect. We present FORMIGA, a fleet management framework that integrates human operators and autonomous robots into a collaborative ecosystem. FORMIGA combines standardised communication through the Robot Operating System with a user-centered interface for monitoring and intervention, while also leveraging large language models to generate executable task code from natural language prompts. The framework was deployed and validated within the FEROX project, a European initiative addressing sustainable berry harvesting in remote environments. In simulation-based trials, FORMIGA demonstrated adaptive task allocation, reduced operator workload, and faster task completion compared to semi-autonomous control, enabling dynamic labor division between humans and robots. By enhancing productivity, supporting worker safety, and promoting resource-efficient operations, FORMIGA contributes to the economic, and environmental dimensions of sustainability, offering a transferable tool for advancing human-robot collaboration in field robotics.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"12 ","pages":"1706910"},"PeriodicalIF":3.0,"publicationDate":"2026-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12823971/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146054659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}