Pub Date: 2025-09-10  DOI: 10.1126/scirobotics.adu5771
Lauren L. Wright, Pooja Vegesna, Joseph E. Michaelis, Bilge Mutlu, Sarah Sebo
Reading fluency is a vital building block for developing literacy, yet the best way to practice fluency—reading aloud—can cause anxiety severe enough to inhibit literacy development in ways that can have an adverse effect on students through adulthood. One promising intervention to mitigate oral reading anxiety is to have children read aloud to a robot. Although observations in prior work have suggested that people likely feel more comfortable in the presence of a robot instead of a human, few studies have empirically demonstrated that people feel less anxious performing in front of a robot compared with a human or used objective physiological indicators to identify decreased anxiety. To investigate whether a robotic reading companion could reduce reading anxiety felt by children, we conducted a within-subjects study where children aged 8 to 11 years (n = 52) read aloud to a human and a robot individually while being monitored for physiological responses associated with anxiety. We found that children exhibited fewer physiological indicators of anxiety, specifically vocal jitter and heart rate variability, when reading to the robot compared with reading to a person. This paper provides strong evidence that a robot’s presence has an effect on the anxiety a person experiences while doing a task, offering justification for the use of robots in a wide-reaching array of social interactions that may be anxiety inducing.
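The two physiological indicators named in the abstract, vocal jitter and heart rate variability, have standard textbook formulations. The sketch below is a hypothetical illustration of how such indicators are commonly computed (it is not the study's actual analysis pipeline): local jitter as the mean cycle-to-cycle variation of voice pitch periods, and RMSSD, a widely used heart rate variability statistic over RR intervals. All signal values are invented.

```python
import numpy as np

def local_jitter(periods):
    """Mean absolute difference of consecutive pitch periods,
    normalized by the mean period and reported as a percentage.
    Higher jitter is associated with vocal tension and arousal."""
    periods = np.asarray(periods, dtype=float)
    return 100.0 * np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences of RR intervals (ms),
    a common time-domain heart rate variability measure."""
    rr = np.asarray(rr_intervals_ms, dtype=float)
    return float(np.sqrt(np.mean(np.diff(rr) ** 2)))

# Perfectly steady pitch periods give zero jitter; varying ones do not.
steady = [5.0, 5.0, 5.0, 5.0]            # ms periods (~200 Hz voice)
shaky = [5.0, 5.3, 4.8, 5.2]
print(local_jitter(steady), local_jitter(shaky))
print(rmssd([800, 810, 790, 805]))       # ms RR intervals (~75 bpm)
```

A lower RMSSD and higher jitter during one condition than another would, under this framing, point toward greater physiological arousal in that condition.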
{"title":"Robotic reading companions can mitigate oral reading anxiety in children","authors":"Lauren L. Wright, Pooja Vegesna, Joseph E. Michaelis, Bilge Mutlu, Sarah Sebo","doi":"10.1126/scirobotics.adu5771","DOIUrl":"10.1126/scirobotics.adu5771","url":null,"abstract":"<div >Reading fluency is a vital building block for developing literacy, yet the best way to practice fluency—reading aloud—can cause anxiety severe enough to inhibit literacy development in ways that can have an adverse effect on students through adulthood. One promising intervention to mitigate oral reading anxiety is to have children read aloud to a robot. Although observations in prior work have suggested that people likely feel more comfortable in the presence of a robot instead of a human, few studies have empirically demonstrated that people feel less anxious performing in front of a robot compared with a human or used objective physiological indicators to identify decreased anxiety. To investigate whether a robotic reading companion could reduce reading anxiety felt by children, we conducted a within-subjects study where children aged 8 to 11 years (<i>n</i> = 52) read aloud to a human and a robot individually while being monitored for physiological responses associated with anxiety. We found that children exhibited fewer physiological indicators of anxiety, specifically vocal jitter and heart rate variability, when reading to the robot compared with reading to a person. 
This paper provides strong evidence that a robot’s presence has an effect on the anxiety a person experiences while doing a task, offering justification for the use of robots in a wide-reaching array of social interactions that may be anxiety inducing.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"10 106","pages":""},"PeriodicalIF":27.5,"publicationDate":"2025-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145028467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-10  DOI: 10.1126/scirobotics.adu6123
Joseph E. Michaelis, Bilge Mutlu
Family-centered integration is critical for the success of in-home educational robots.
{"title":"How can educational robots enhance family life? Through careful integration","authors":"Joseph E. Michaelis, Bilge Mutlu","doi":"10.1126/scirobotics.adu6123","DOIUrl":"10.1126/scirobotics.adu6123","url":null,"abstract":"<div >Family-centered integration is critical for the success of in-home educational robots.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"10 106","pages":""},"PeriodicalIF":27.5,"publicationDate":"2025-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145028495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
According to productive failure (PF) theory, experiencing failure during problem-solving can enhance students’ knowledge acquisition in subsequent instruction. However, challenging students with problems beyond their current capabilities may strain their skills, prior knowledge, and emotional well-being. To address this, we designed a social robot–assisted teaching activity in which students observed a robot’s unsuccessful problem-solving attempts, offering a PF-like preparatory effect without requiring direct failure. We conducted two classroom-based studies in a middle school setting to evaluate the method’s effectiveness. In study 1 (N = 135), we compared three instructional methods—observing robot failure (RF), individual problem-solving failure, and direct instruction—in an eighth-grade mathematics lesson. Students in the RF condition showed the greatest gains in conceptual understanding and reported lower social pressure, although no significant differences were found in procedural knowledge or knowledge transfer. Follow-up study 2 (N = 110) further validated the method’s effectiveness in supporting knowledge acquisition after a 2-week robot-involved adaptation phase, when the novelty effect had largely subsided. Students confirmed their perception of the robot as a peer, and they offered positive evaluations of its intelligence and neutral views of its anthropomorphism. Our findings suggest that observing the robot’s failure has a comparable, or even greater, effect on knowledge acquisition than experiencing failure firsthand. These results underscore the value of social robots as peers in science, technology, engineering, and mathematics education and highlight the potential of integrating robotics with evidence-based teaching strategies to enhance learning outcomes.
{"title":"Observing a robot peer’s failures facilitates students’ classroom learning","authors":"Liuqing Chen, Yu Cai, Yuyang Fang, Ziqi Yang, Duowei Xia, Jiaxiang You, Shuhong Xiao, Yaxuan Song, Lingwei Zhan, Juanjuan Chen, Lingyun Sun","doi":"10.1126/scirobotics.adu5257","DOIUrl":"10.1126/scirobotics.adu5257","url":null,"abstract":"<div >According to productive failure (PF) theory, experiencing failure during problem-solving can enhance students’ knowledge acquisition in subsequent instruction. However, challenging students with problems beyond their current capabilities may strain their skills, prior knowledge, and emotional well-being. To address this, we designed a social robot–assisted teaching activity in which students observed a robot’s unsuccessful problem-solving attempts, offering a PF-like preparatory effect without requiring direct failure. We conducted two classroom-based studies in a middle school setting to evaluate the method’s effectiveness. In study 1 (<i>N</i> = 135), we compared three instructional methods—observing robot failure (RF), individual problem-solving failure, and direct instruction—in an eighth-grade mathematics lesson. Students in the RF condition showed the greatest gains in conceptual understanding and reported lower social pressure, although no significant differences were found in procedural knowledge or knowledge transfer. Follow-up study 2 (<i>N</i> = 110) further validated the method’s effectiveness in supporting knowledge acquisition after a 2-week robot-involved adaptation phase, when the novelty effect had largely subsided. Students confirmed their perception of the robot as a peer, and they offered positive evaluations of its intelligence and neutral views of its anthropomorphism. Our findings suggest that observing the robot’s failure has a comparable, or even greater, effect on knowledge acquisition than experiencing failure firsthand. 
These results underscore the value of social robots as peers in science, technology, engineering, and mathematics education and highlight the potential of integrating robotics with evidence-based teaching strategies to enhance learning outcomes.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"10 106","pages":""},"PeriodicalIF":27.5,"publicationDate":"2025-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145028483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-03  DOI: 10.1126/scirobotics.adu5830
Hyegi Min, Yue Wang, Jiaojiao Wang, Xiuyuan Li, Woong Kim, Onur Aydin, Sehong Kang, Jae-Sung You, Jongwon Lim, Katy Wolhaupter, Yikang Xu, Zhengguang Zhu, Jianyu Gu, Xinming Li, Yongdeok Kim, Tarun Rao, Hyun Joon Kong, Taher A. Saif, Yonggang Huang, John A. Rogers, Rashid Bashir
Neuronal control of skeletal muscle function is ubiquitous across species for locomotion and doing work. In particular, emergent behaviors of neurons in biohybrid neuromuscular systems can advance bioinspired locomotion research. Although recent studies have demonstrated that chemical or optogenetic stimulation of neurons can control muscular actuation through the neuromuscular junction (NMJ), the correlation between neuronal activities and resulting modulation in the muscle responses is less understood, hindering the engineering of high-level functional biohybrid systems. Here, we developed NMJ-based biohybrid crawling robots with optogenetic mouse motor neurons, skeletal muscles, 3D-printed hydrogel scaffolds, and integrated onboard wireless micro–light-emitting diode (μLED)–based optoelectronics. We investigated the coupling of the light stimulation and neuromuscular actuation through power spectral density (PSD) analysis. We verified the modulation of the mechanical functionality of the robot depending on the frequency of the optical stimulation to the neural tissue. We demonstrated continued muscle contraction up to 20 minutes after a 1-minute-long pulsed 2-hertz optical stimulation of the neural tissue. Furthermore, the robots were shown to maintain their mechanical functionality for more than 2 weeks. This study provides insights into reliable neuronal control with optoelectronics, supporting advancements in neuronal modulation, biohybrid intelligence, and automation.
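The abstract reports using power spectral density analysis to couple optical stimulation frequency to muscle actuation. The following is an illustrative sketch, not the authors' pipeline: it builds a simulated contraction trace driven at 2 Hz, computes a simple FFT periodogram, and checks that the dominant spectral peak sits at the stimulation frequency. The sampling rate, noise level, and signal model are all invented.

```python
import numpy as np

fs = 100.0                          # sampling rate in Hz (assumed)
t = np.arange(0, 60.0, 1.0 / fs)   # 60-second simulated recording
rng = np.random.default_rng(0)
# Simulated muscle displacement: a 2 Hz response to pulsed optical
# stimulation, plus additive measurement noise.
trace = np.sin(2 * np.pi * 2.0 * t) + 0.3 * rng.standard_normal(t.size)

# Periodogram: squared FFT magnitude at each nonnegative frequency.
power = np.abs(np.fft.rfft(trace)) ** 2
freqs = np.fft.rfftfreq(trace.size, d=1.0 / fs)
dominant = freqs[np.argmax(power)]
print(f"dominant frequency: {dominant:.2f} Hz")
```

In this framing, agreement between the dominant frequency of the actuation trace and the stimulation frequency is evidence that the light drive, rather than spontaneous activity, is controlling contraction.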
{"title":"Optogenetic neuromuscular actuation of a miniature electronic biohybrid robot","authors":"Hyegi Min, Yue Wang, Jiaojiao Wang, Xiuyuan Li, Woong Kim, Onur Aydin, Sehong Kang, Jae-Sung You, Jongwon Lim, Katy Wolhaupter, Yikang Xu, Zhengguang Zhu, Jianyu Gu, Xinming Li, Yongdeok Kim, Tarun Rao, Hyun Joon Kong, Taher A. Saif, Yonggang Huang, John A. Rogers, Rashid Bashir","doi":"10.1126/scirobotics.adu5830","DOIUrl":"10.1126/scirobotics.adu5830","url":null,"abstract":"<div >Neuronal control of skeletal muscle function is ubiquitous across species for locomotion and doing work. In particular, emergent behaviors of neurons in biohybrid neuromuscular systems can advance bioinspired locomotion research. Although recent studies have demonstrated that chemical or optogenetic stimulation of neurons can control muscular actuation through the neuromuscular junction (NMJ), the correlation between neuronal activities and resulting modulation in the muscle responses is less understood, hindering the engineering of high-level functional biohybrid systems. Here, we developed NMJ-based biohybrid crawling robots with optogenetic mouse motor neurons, skeletal muscles, 3D-printed hydrogel scaffolds, and integrated onboard wireless micro–light-emitting diode (μLED)–based optoelectronics. We investigated the coupling of the light stimulation and neuromuscular actuation through power spectral density (PSD) analysis. We verified the modulation of the mechanical functionality of the robot depending on the frequency of the optical stimulation to the neural tissue. We demonstrated continued muscle contraction up to 20 minutes after a 1-minute-long pulsed 2-hertz optical stimulation of the neural tissue. Furthermore, the robots were shown to maintain their mechanical functionality for more than 2 weeks. 
This study provides insights into reliable neuronal control with optoelectronics, supporting advancements in neuronal modulation, biohybrid intelligence, and automation.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"10 106","pages":""},"PeriodicalIF":27.5,"publicationDate":"2025-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144935569","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-09-03  DOI: 10.1126/scirobotics.ads1204
Matthew Lai, Keegan Go, Zhibin Li, Torsten Kröger, Stefan Schaal, Kelsey Allen, Jonathan Scholz
Modern robotic manufacturing requires collision-free coordination of multiple robots to complete numerous tasks in shared, obstacle-rich workspaces. Although individual tasks may be simple in isolation, automated joint task allocation, scheduling, and motion planning under spatiotemporal constraints remain computationally intractable for classical methods at real-world scales. Existing multiarm systems deployed in industry rely on human intuition and experience to design feasible trajectories manually in a labor-intensive process. To address this challenge, we propose a reinforcement learning (RL) framework to achieve automated task and motion planning, tested in an obstacle-rich environment with eight robots performing 40 reaching tasks in a shared workspace, where any robot can perform any task in any order. Our approach builds on a graph neural network (GNN) policy trained via RL on procedurally generated environments with diverse obstacle layouts, robot configurations, and task distributions. It uses a graph representation of scenes and a graph policy neural network trained through RL to generate trajectories of multiple robots, jointly solving the subproblems of task allocation, scheduling, and motion planning. Trained on large randomly generated task sets in simulation, our policy generalizes zero-shot to unseen settings with varying robot placements, obstacle geometries, and task poses. We further demonstrate that the high-speed capability of our solution enables its use in workcell layout optimization, improving solution times. The speed and scalability of our planner also open the door to capabilities such as fault-tolerant planning and online perception-based replanning, where rapid adaptation to dynamic task sets is required.
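The abstract describes encoding the workcell as a graph over robots, tasks, and obstacles and processing it with a GNN policy. As a toy sketch of that representation (the feature dimensions, edge choices, and single mean-aggregation round below are invented, and the paper's actual architecture is far richer), one message-passing step can let each robot node aggregate information from its candidate tasks:

```python
import numpy as np

rng = np.random.default_rng(1)
robot_feats = rng.standard_normal((2, 4))   # 2 robot nodes, 4-dim features
task_feats = rng.standard_normal((3, 4))    # 3 task nodes
edges = [(0, 0), (0, 1), (1, 1), (1, 2)]    # (robot, task) candidate pairs

W = rng.standard_normal((4, 4)) * 0.1       # shared message weight matrix

def message_pass(robot_feats, task_feats, edges, W):
    """Each robot node adds the mean of its connected tasks'
    linearly transformed features -- one round of message passing."""
    updated = robot_feats.copy()
    for r in range(robot_feats.shape[0]):
        msgs = [task_feats[k] @ W for (rr, k) in edges if rr == r]
        if msgs:
            updated[r] = robot_feats[r] + np.mean(msgs, axis=0)
    return updated

out = message_pass(robot_feats, task_feats, edges, W)
print(out.shape)
```

Because the aggregation is over sets of neighbors rather than fixed-size inputs, the same learned weights apply regardless of how many robots, tasks, or obstacles appear, which is what enables the zero-shot generalization the abstract reports.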
{"title":"RoboBallet: Planning for multirobot reaching with graph neural networks and reinforcement learning","authors":"Matthew Lai, Keegan Go, Zhibin Li, Torsten Kröger, Stefan Schaal, Kelsey Allen, Jonathan Scholz","doi":"10.1126/scirobotics.ads1204","DOIUrl":"10.1126/scirobotics.ads1204","url":null,"abstract":"<div >Modern robotic manufacturing requires collision-free coordination of multiple robots to complete numerous tasks in shared, obstacle-rich workspaces. Although individual tasks may be simple in isolation, automated joint task allocation, scheduling, and motion planning under spatiotemporal constraints remain computationally intractable for classical methods at real-world scales. Existing multiarm systems deployed in industry rely on human intuition and experience to design feasible trajectories manually in a labor-intensive process. To address this challenge, we propose a reinforcement learning (RL) framework to achieve automated task and motion planning, tested in an obstacle-rich environment with eight robots performing 40 reaching tasks in a shared workspace, where any robot can perform any task in any order. Our approach builds on a graph neural network (GNN) policy trained via RL on procedurally generated environments with diverse obstacle layouts, robot configurations, and task distributions. It uses a graph representation of scenes and a graph policy neural network trained through RL to generate trajectories of multiple robots, jointly solving the subproblems of task allocation, scheduling, and motion planning. Trained on large randomly generated task sets in simulation, our policy generalizes zero-shot to unseen settings with varying robot placements, obstacle geometries, and task poses. We further demonstrate that the high-speed capability of our solution enables its use in workcell layout optimization, improving solution times. 
The speed and scalability of our planner also open the door to capabilities such as fault-tolerant planning and online perception-based replanning, where rapid adaptation to dynamic task sets is required.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"10 106","pages":""},"PeriodicalIF":27.5,"publicationDate":"2025-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144935547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-08-27  DOI: 10.1126/scirobotics.aea7390
Ken Goldberg
{"title":"Good old-fashioned engineering can close the 100,000-year “data gap” in robotics","authors":"Ken Goldberg","doi":"10.1126/scirobotics.aea7390","DOIUrl":"10.1126/scirobotics.aea7390","url":null,"abstract":"","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"10 105","pages":""},"PeriodicalIF":27.5,"publicationDate":"2025-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.science.org/doi/reader/10.1126/scirobotics.aea7390","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144910562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-08-27  DOI: 10.1126/scirobotics.aea7897
Nancy M. Amato, Seth Hutchinson, Animesh Garg, Aude Billard, Daniela Rus, Russ Tedrake, Frank Park, Ken Goldberg
Leading researchers debate the long-term influence of model-free methods that use large sets of demonstration data to train numerical generative models to control robots.
{"title":"“Data will solve robotics and automation: True or false?”: A debate","authors":"Nancy M. Amato, Seth Hutchinson, Animesh Garg, Aude Billard, Daniela Rus, Russ Tedrake, Frank Park, Ken Goldberg","doi":"10.1126/scirobotics.aea7897","DOIUrl":"10.1126/scirobotics.aea7897","url":null,"abstract":"<div >Leading researchers debate the long-term influence of model-free methods that use large sets of demonstration data to train numerical generative models to control robots.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"10 105","pages":""},"PeriodicalIF":27.5,"publicationDate":"2025-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.science.org/doi/reader/10.1126/scirobotics.aea7897","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144910563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-08-27  DOI: 10.1126/scirobotics.adu2381
Ronald H. Heisser, Khoi D. Ly, Ofek Peretz, Young S. Kim, Carlos A. Diaz-Ruiz, Rachel M. Miller, Cameron A. Aubin, Sadaf Sobhani, Nikolaos Bouklas, Robert F. Shepherd
High-resolution electronic tactile displays stand to transform haptics for remote machine operation, virtual reality, and digital information access for people who are blind or visually impaired. Yet, increasing the resolution of these displays requires increasing the number of individually addressable actuators while simultaneously reducing their total surface area, power consumption, and weight, challenges most evidently reflected in the dearth of affordable multiline braille displays. Blending principles from soft robotics, microfluidics, and nonlinear mechanics, we introduce a 10-dot–by–10-dot array of 2-millimeter-diameter, combustion-powered, eversible soft actuators that individually rise in 0.24 milliseconds to repeatably produce display patterns. Our rubber architecture is hermetically sealed and demonstrates resistance to liquid and dirt ingress. We demonstrate complete actuation cycles in an untethered tactile display prototype. Our platform technology extends the capabilities of tactile displays to environments that are inaccessible to traditional actuation modalities.
{"title":"Explosion-powered eversible tactile displays","authors":"Ronald H. Heisser, Khoi D. Ly, Ofek Peretz, Young S. Kim, Carlos A. Diaz-Ruiz, Rachel M. Miller, Cameron A. Aubin, Sadaf Sobhani, Nikolaos Bouklas, Robert F. Shepherd","doi":"10.1126/scirobotics.adu2381","DOIUrl":"10.1126/scirobotics.adu2381","url":null,"abstract":"<div >High-resolution electronic tactile displays stand to transform haptics for remote machine operation, virtual reality, and digital information access for people who are blind or visually impaired. Yet, increasing the resolution of these displays requires increasing the number of individually addressable actuators while simultaneously reducing their total surface area, power consumption, and weight, challenges most evidently reflected in the dearth of affordable multiline braille displays. Blending principles from soft robotics, microfluidics, and nonlinear mechanics, we introduce a 10-dot–by–10-dot array of 2-millimeter-diameter, combustion-powered, eversible soft actuators that individually rise in 0.24 milliseconds to repeatably produce display patterns. Our rubber architecture is hermetically sealed and demonstrates resistance to liquid and dirt ingress. We demonstrate complete actuation cycles in an untethered tactile display prototype. 
Our platform technology extends the capabilities of tactile displays to environments that are inaccessible to traditional actuation modalities.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"10 105","pages":""},"PeriodicalIF":27.5,"publicationDate":"2025-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144910525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-08-27  DOI: 10.1126/scirobotics.adv3604
Junzhe He, Chong Zhang, Fabian Jenelten, Ruben Grandia, Moritz Bächer, Marco Hutter
Dynamic locomotion of legged robots is a critical yet challenging topic in expanding the operational range of mobile robots. It requires precise planning when possible footholds are sparse, robustness against uncertainties and disturbances, and generalizability across diverse terrains. Although traditional model-based controllers excel at planning on complex terrains, they struggle with real-world uncertainties. Learning-based controllers offer robustness to such uncertainties but often lack precision on terrains with sparse steppable areas. Hybrid methods achieve enhanced robustness on sparse terrains by combining both methods but are computationally demanding and constrained by the inherent limitations of model-based planners. To achieve generalized legged locomotion on diverse terrains while preserving the robustness of learning-based controllers, this paper proposes an attention-based map encoding conditioned on robot proprioception, which is trained as part of the controller using reinforcement learning. We show that the network learns to focus on steppable areas for future footholds when the robot dynamically navigates diverse and challenging terrains. We synthesized behaviors that exhibited robustness against uncertainties while enabling precise and agile traversal of sparse terrains. In addition, our method offers a way to interpret the topographical perception of a neural network. We have trained two controllers for a 12-degrees-of-freedom quadrupedal robot and a 23-degrees-of-freedom humanoid robot and tested the resulting controllers in the real world under various challenging indoor and outdoor scenarios, including ones unseen during training.
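The core mechanism described, a map encoding in which attention is conditioned on proprioception, corresponds structurally to scaled dot-product attention with the query built from the robot's state and the keys and values built from terrain map patches. The sketch below is a hedged illustration of that structure only; all shapes and embeddings are invented, not taken from the paper.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attend(proprio_query, patch_keys, patch_values):
    """Scaled dot-product attention: the proprioception-derived query
    weights terrain patches by relevance to the robot's current state."""
    d = patch_keys.shape[1]
    scores = patch_keys @ proprio_query / np.sqrt(d)
    weights = softmax(scores)            # one weight per map patch
    return weights, weights @ patch_values

rng = np.random.default_rng(2)
q = rng.standard_normal(8)               # query from proprioception
K = rng.standard_normal((16, 8))         # 16 terrain patches, 8-dim keys
V = rng.standard_normal((16, 8))         # patch value embeddings

w, encoded = attend(q, K, V)
print(w.sum(), encoded.shape)
```

Inspecting the attention weights `w` over map patches is also what gives this design its interpretability: high-weight patches indicate where the policy is "looking" for footholds.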
{"title":"Attention-based map encoding for learning generalized legged locomotion","authors":"Junzhe He, Chong Zhang, Fabian Jenelten, Ruben Grandia, Moritz Bächer, Marco Hutter","doi":"10.1126/scirobotics.adv3604","DOIUrl":"10.1126/scirobotics.adv3604","url":null,"abstract":"<div >Dynamic locomotion of legged robots is a critical yet challenging topic in expanding the operational range of mobile robots. It requires precise planning when possible footholds are sparse, robustness against uncertainties and disturbances, and generalizability across diverse terrains. Although traditional model-based controllers excel at planning on complex terrains, they struggle with real-world uncertainties. Learning-based controllers offer robustness to such uncertainties but often lack precision on terrains with sparse steppable areas. Hybrid methods achieve enhanced robustness on sparse terrains by combining both methods but are computationally demanding and constrained by the inherent limitations of model-based planners. To achieve generalized legged locomotion on diverse terrains while preserving the robustness of learning-based controllers, this paper proposes an attention-based map encoding conditioned on robot proprioception, which is trained as part of the controller using reinforcement learning. We show that the network learns to focus on steppable areas for future footholds when the robot dynamically navigates diverse and challenging terrains. We synthesized behaviors that exhibited robustness against uncertainties while enabling precise and agile traversal of sparse terrains. In addition, our method offers a way to interpret the topographical perception of a neural network. 
We have trained two controllers for a 12-degrees-of-freedom quadrupedal robot and a 23-degrees-of-freedom humanoid robot and tested the resulting controllers in the real world under various challenging indoor and outdoor scenarios, including ones unseen during training.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"10 105","pages":""},"PeriodicalIF":27.5,"publicationDate":"2025-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144910554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Animals leverage their full embodiment to achieve multimodal, redundant, and subtle communication. To achieve the same for robots, they must similarly exploit their brain-body-environment interactions or their embodied intelligence. To advance this approach, we propose a framework building on Shannon’s information channel theory for communication to provide the key principles and benchmarks for advancing human-robot communication.
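Shannon's channel framing, which the abstract proposes as the basis for benchmarking human-robot communication, can be made concrete by treating the robot's intended signal and the human's interpretation as the two ends of a noisy channel and scoring the channel by mutual information. The confusion-matrix numbers below are invented for illustration.

```python
import numpy as np

def mutual_information(joint):
    """I(X;Y) in bits from a joint probability table p(x, y) relating
    intended signals (rows) to perceived signals (columns)."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)    # marginal over intents
    py = joint.sum(axis=0, keepdims=True)    # marginal over percepts
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

# Hypothetical binary cue: read correctly 90% of the time vs. always.
noisy = np.array([[0.45, 0.05],
                  [0.05, 0.45]])
perfect = np.array([[0.5, 0.0],
                    [0.0, 0.5]])
mi_perfect = mutual_information(perfect)   # 1 bit: noiseless channel
mi_noisy = mutual_information(noisy)       # misreadings cost capacity
print(mi_perfect, mi_noisy)
```

Under this framing, adding redundant modalities (gesture plus sound plus posture) is a way of raising the mutual information between what the robot means and what the human understands, which is exactly the multimodal redundancy the abstract attributes to animal communication.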
{"title":"Embodied intelligence paradigm for human-robot communication","authors":"Nana Obayashi, Arsen Abdulali, Fumiya Iida, Josie Hughes","doi":"10.1126/scirobotics.ads8528","DOIUrl":"10.1126/scirobotics.ads8528","url":null,"abstract":"<div >Animals leverage their full embodiment to achieve multimodal, redundant, and subtle communication. To achieve the same for robots, they must similarly exploit their brain-body-environment interactions or their embodied intelligence. To advance this approach, we propose a framework building on Shannon’s information channel theory for communication to provide the key principles and benchmarks for advancing human-robot communication.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"10 105","pages":""},"PeriodicalIF":27.5,"publicationDate":"2025-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144881579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}