Pub Date : 2025-12-10 DOI: 10.1126/scirobotics.aed6461
Jump, recharge, repeat: Insect-inspired jumping robots and the challenge of energy harvesting
Donato Romano
Jumping biorobots merge insect agility with energy harvesting, enabling new mobility and long-term autonomous operation.
{"title":"Jump, recharge, repeat: Insect-inspired jumping robots and the challenge of energy harvesting","authors":"Donato Romano","doi":"10.1126/scirobotics.aed6461","DOIUrl":"10.1126/scirobotics.aed6461","url":null,"abstract":"<div >Jumping biorobots merge insect agility with energy harvesting, enabling new mobility and long-term autonomous operation.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"10 109","pages":""},"PeriodicalIF":27.5,"publicationDate":"2025-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145717353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-11-26 DOI: 10.1126/scirobotics.aec3393
Sight Guide demonstrates robotics-inspired vision assistance at the Cybathlon
Patrick Pfreundschuh, Giovanni Cioffi, Cornelius von Einem, Alexander Wyss, Hans Wernher van de Venn, Cesar Cadena, Davide Scaramuzza, Roland Siegwart, Alireza Darvishy, Lukas Hendry
A mobile-robotics–based navigation and perception system guided a visually impaired pilot through complex tasks at Cybathlon.
{"title":"Sight Guide demonstrates robotics-inspired vision assistance at the Cybathlon","authors":"Patrick Pfreundschuh, Giovanni Cioffi, Cornelius von Einem, Alexander Wyss, Hans Wernher van de Venn, Cesar Cadena, Davide Scaramuzza, Roland Siegwart, Alireza Darvishy, Lukas Hendry","doi":"10.1126/scirobotics.aec3393","DOIUrl":"10.1126/scirobotics.aec3393","url":null,"abstract":"<div >A mobile-robotics–based navigation and perception system guided a visually impaired pilot through complex tasks at Cybathlon.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"10 108","pages":""},"PeriodicalIF":27.5,"publicationDate":"2025-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145609959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-11-26 DOI: 10.1126/scirobotics.adv4696
Foldable and rollable interlaced structure for deployable robotic systems
Sun-Pill Jung, Jaeyoung Song, Chan Kim, Haemin Lee, Inchul Jeong, Jongmin Kim, Kyu-Jin Cho
Extendable structures often use rollable designs, with long, flexible materials that can be wound onto a hub for storage without the need for joints. However, achieving high stiffness and strength in the extended state while keeping the hub compact is challenging, given that stiff structures are difficult to bend and typically require larger hubs for storage. Here, we introduce a corrugated sheet–shaped foldable design that enables Z-folding by connecting multiple strips in parallel. The unfolded, corrugated form structure offers a high load-bearing capacity, and the folded, stacked form structure can be smoothly rolled onto a hub, enabling fold-and-roll storage. The key innovation is the formation of an interlaced origami structure by connecting strips through a ribbon-weaving technique. This interlacing design enables both localized flexibility and mutual constraints between strips: The localized flexibility accommodates perimeter differences between stacked strips during rolling, and the densely repeated mutual constraints make the corrugation resist excessive deformation under external forces. Using these structures, we made two deployable mobile robots: one with a 1.6-meter deployable arm for shelving tasks and another with a tetrahedral deployable frame that supported a meter-scale 3D-printing system. Our results showcase the potential of this interlaced, corrugated approach for deployable robotic systems requiring both compactness and strength.
{"title":"Foldable and rollable interlaced structure for deployable robotic systems","authors":"Sun-Pill Jung, Jaeyoung Song, Chan Kim, Haemin Lee, Inchul Jeong, Jongmin Kim, Kyu-Jin Cho","doi":"10.1126/scirobotics.adv4696","DOIUrl":"10.1126/scirobotics.adv4696","url":null,"abstract":"<div >Extendable structures often use rollable designs, with long, flexible materials that can be wound onto a hub for storage without the need for joints. However, achieving high stiffness and strength in the extended state while keeping the hub compact is challenging, given that stiff structures are difficult to bend and typically require larger hubs for storage. Here, we introduce a corrugated sheet–shaped foldable design that enables Z-folding by connecting multiple strips in parallel. The unfolded, corrugated form structure offers a high load-bearing capacity, and the folded, stacked form structure can be smoothly rolled onto a hub, enabling fold-and-roll storage. The key innovation is the formation of an interlaced origami structure by connecting strips through a ribbon-weaving technique. This interlacing design enables both localized flexibility and mutual constraints between strips: The localized flexibility accommodates perimeter differences between stacked strips during rolling, and the densely repeated mutual constraints make the corrugation resist excessive deformation under external forces. Using these structures, we made two deployable mobile robots: one with a 1.6-meter deployable arm for shelving tasks and another with a tetrahedral deployable frame that supported a meter-scale 3D-printing system. Our results showcase the potential of this interlaced, corrugated approach for deployable robotic systems requiring both compactness and strength.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"10 108","pages":""},"PeriodicalIF":27.5,"publicationDate":"2025-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.science.org/doi/reader/10.1126/scirobotics.adv4696","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145600187","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-11-26 DOI: 10.1126/scirobotics.adv0496
Robotic manipulation of human bipedalism reveals overlapping internal representations of space and time
Paul Belzner, Patrick A. Forbes, Calvin Kuo, Jean-Sébastien Blouin
Effective control of bipedal postures relies on sensory inputs from the past, which encode dynamic changes in the spatial properties of our movement over time. To uncover how the spatial and temporal properties of an upright posture interact in the perception and control of standing balance, we implemented a robotic virtualization of human body dynamics to systematically alter inertia and viscosity as well as sensorimotor delays in 20 healthy participants. Inertia gains below one or negative viscosity gains led to larger postural oscillations and caused participants to exceed virtual balance limits, mimicking the disruptive effects of an additional 200-millisecond sensorimotor delay. When balancing without delays, participants adjusted their inertia gains to below one and viscosity gains to negative values to match the perception of balancing with an imposed delay. When delays were present, participants increased inertia gains above one and used positive viscosity gains to align their perception with baseline balance. Building on these findings, 10 naïve participants exhibited improved balance stability and reduced the number of instances they exceeded the limits when balancing with a 200-millisecond delay compensated by inertia gains above one and positive viscosity gains. These results underscore the importance of innovative robotic virtualizations of standing balance to reveal the interconnected representations of space and time that underlie the stable perception and control of bipedal balance. Robotic manipulation of body physics offers a transformative approach to understanding how the nervous system processes spatial information over time and could address clinical sensorimotor deficits associated with delays.
{"title":"Robotic manipulation of human bipedalism reveals overlapping internal representations of space and time","authors":"Paul Belzner, Patrick A. Forbes, Calvin Kuo, Jean-Sébastien Blouin","doi":"10.1126/scirobotics.adv0496","DOIUrl":"10.1126/scirobotics.adv0496","url":null,"abstract":"<div >Effective control of bipedal postures relies on sensory inputs from the past, which encode dynamic changes in the spatial properties of our movement over time. To uncover how the spatial and temporal properties of an upright posture interact in the perception and control of standing balance, we implemented a robotic virtualization of human body dynamics to systematically alter inertia and viscosity as well as sensorimotor delays in 20 healthy participants. Inertia gains below one or negative viscosity gains led to larger postural oscillations and caused participants to exceed virtual balance limits, mimicking the disruptive effects of an additional 200-millisecond sensorimotor delay. When balancing without delays, participants adjusted their inertia gains to below one and viscosity gains to negative values to match the perception of balancing with an imposed delay. When delays were present, participants increased inertia gains above one and used positive viscosity gains to align their perception with baseline balance. Building on these findings, 10 naïve participants exhibited improved balance stability and reduced the number of instances they exceeded the limits when balancing with a 200-millisecond delay compensated by inertia gains above one and positive viscosity gains. These results underscore the importance of innovative robotic virtualizations of standing balance to reveal the interconnected representations of space and time that underlie the stable perception and control of bipedal balance. Robotic manipulation of body physics offers a transformative approach to understanding how the nervous system processes spatial information over time and could address clinical sensorimotor deficits associated with delays.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"10 108","pages":""},"PeriodicalIF":27.5,"publicationDate":"2025-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145600188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-11-19 DOI: 10.1126/scirobotics.aed6762
Robotic cross-pollination of genetically modified flowers
Melisa Yashinski
Engineered tomato plants produced flowers with visible stigmas that a robot could detect and pollinate faster than a human.
{"title":"Robotic cross-pollination of genetically modified flowers","authors":"Melisa Yashinski","doi":"10.1126/scirobotics.aed6762","DOIUrl":"10.1126/scirobotics.aed6762","url":null,"abstract":"<div >Engineered tomato plants produced flowers with visible stigmas that a robot could detect and pollinate faster than a human.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"10 108","pages":""},"PeriodicalIF":27.5,"publicationDate":"2025-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145554727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-11-19 DOI: 10.1126/scirobotics.aec6029
Erratum for the Research Article “A lightweight robotic leg prosthesis replicating the biomechanics of the knee, ankle, and toe joint” by M. Tran et al.
{"title":"Erratum for the Research Article “A lightweight robotic leg prosthesis replicating the biomechanics of the knee, ankle, and toe joint” by M. Tran et al.","authors":"","doi":"10.1126/scirobotics.aec6029","DOIUrl":"10.1126/scirobotics.aec6029","url":null,"abstract":"","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"10 108","pages":""},"PeriodicalIF":27.5,"publicationDate":"2025-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145554722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-11-19 DOI: 10.1126/scirobotics.aed1537
The robots in Superman and The Fantastic Four: First Steps are as amazing as the superheroes
Robin R. Murphy
Robot assistants in Superman and The Fantastic Four: First Steps may not save the world, but they fulfill six different jobs.
{"title":"The robots in Superman and The Fantastic Four: First Steps are as amazing as the superheroes","authors":"Robin R. Murphy","doi":"10.1126/scirobotics.aed1537","DOIUrl":"10.1126/scirobotics.aed1537","url":null,"abstract":"<div >Robot assistants in <i>Superman</i> and <i>The Fantastic Four: First Steps</i> may not save the world, but they fulfill six different jobs.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"10 108","pages":""},"PeriodicalIF":27.5,"publicationDate":"2025-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145545473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-11-19 DOI: 10.1126/scirobotics.adx1519
Metamaterial robotics
Xiaoyang Zheng, Yuhao Jiang, Mustafa Mete, Jingjing Li, Ikumu Watanabe, Takayuki Yamada, Jamie Paik
Mechanical metamaterials with customized microstructures are increasingly shaping robotic design and functionality, enabling the integration of sensing, actuation, control, and computation within the robot body. This Review outlines how metamaterial design principles—mechanics-inspired architectures, shape-reconfigurable structures, and material-driven functionality—enhance adaptability and distributed intelligence in robotics. We also discuss how artificial intelligence supports metamaterial robotics in design, modeling, and control, advancing systems with complex sensory feedback, learning capability, and adaptive physical interactions. This Review aims to inspire the community to explore the transformative potential of metamaterial robotics, fostering innovations that bridge the gap between materials engineering and intelligent robotics.
{"title":"Metamaterial robotics","authors":"Xiaoyang Zheng, Yuhao Jiang, Mustafa Mete, Jingjing Li, Ikumu Watanabe, Takayuki Yamada, Jamie Paik","doi":"10.1126/scirobotics.adx1519","DOIUrl":"10.1126/scirobotics.adx1519","url":null,"abstract":"<div >Mechanical metamaterials with customized microstructures are increasingly shaping robotic design and functionality, enabling the integration of sensing, actuation, control, and computation within the robot body. This Review outlines how metamaterial design principles—mechanics-inspired architectures, shape-reconfigurable structures, and material-driven functionality—enhance adaptability and distributed intelligence in robotics. We also discuss how artificial intelligence supports metamaterial robotics in design, modeling, and control, advancing systems with complex sensory feedback, learning capability, and adaptive physical interactions. This Review aims to inspire the community to explore the transformative potential of metamaterial robotics, fostering innovations that bridge the gap between materials engineering and intelligent robotics.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"10 108","pages":""},"PeriodicalIF":27.5,"publicationDate":"2025-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145545474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-11-19 DOI: 10.1126/scirobotics.ads8652
Deep domain adaptation eliminates costly data required for task-agnostic wearable robotic control
Keaton L. Scherpereel, Matthew C. Gombolay, Max K. Shepherd, Carlos A. Carrasquillo, Omer T. Inan, Aaron J. Young
Data-driven methods have transformed our ability to assess and respond to human movement with wearable robots, promising real-world rehabilitation and augmentation benefits. However, the proliferation of data-driven methods, with the associated demand for increased personalization and performance, requires vast quantities of high-quality, device-specific data. Procuring these data is often intractable because of resource and personnel costs. We propose a framework that overcomes data scarcity by leveraging simulated sensors from biomechanical models to form a stepping-stone domain through which easily accessible data can be translated into data-limited domains. We developed and optimized a deep domain adaptation network that replaces costly, device-specific, labeled data with open-source datasets and unlabeled exoskeleton data. Using our network, we trained a hip and knee joint moment estimator with performance comparable to a best-case model trained with a complete, device-specific dataset [incurring only an 11 to 20%, 0.019 to 0.028 newton-meters per kilogram (Nm/kg) increase in error for a semisupervised model and 20 to 44%, 0.033 to 0.062 Nm/kg for an unsupervised model]. Our network significantly outperformed counterpart networks without domain adaptation (which incurred errors of 36 to 45% semisupervised and 50 to 60% unsupervised). Deploying our models in the real-time control loop of a hip/knee exoskeleton (N = 8) demonstrated estimator performance similar to offline results while augmenting user performance based on those estimated moments (9.5 to 14.6% metabolic cost reductions compared with no exoskeleton). Our framework enables researchers to train real-time deployable deep learning, task-agnostic models with limited or no access to labeled, device-specific data.
{"title":"Deep domain adaptation eliminates costly data required for task-agnostic wearable robotic control","authors":"Keaton L. Scherpereel, Matthew C. Gombolay, Max K. Shepherd, Carlos A. Carrasquillo, Omer T. Inan, Aaron J. Young","doi":"10.1126/scirobotics.ads8652","DOIUrl":"10.1126/scirobotics.ads8652","url":null,"abstract":"<div >Data-driven methods have transformed our ability to assess and respond to human movement with wearable robots, promising real-world rehabilitation and augmentation benefits. However, the proliferation of data-driven methods, with the associated demand for increased personalization and performance, requires vast quantities of high-quality, device-specific data. Procuring these data is often intractable because of resource and personnel costs. We propose a framework that overcomes data scarcity by leveraging simulated sensors from biomechanical models to form a stepping-stone domain through which easily accessible data can be translated into data-limited domains. We developed and optimized a deep domain adaptation network that replaces costly, device-specific, labeled data with open-source datasets and unlabeled exoskeleton data. Using our network, we trained a hip and knee joint moment estimator with performance comparable to a best-case model trained with a complete, device-specific dataset [incurring only an 11 to 20%, 0.019 to 0.028 newton-meters per kilogram (Nm/kg) increase in error for a semisupervised model and 20 to 44%, 0.033 to 0.062 Nm/kg for an unsupervised model]. Our network significantly outperformed counterpart networks without domain adaptation (which incurred errors of 36 to 45% semisupervised and 50 to 60% unsupervised). Deploying our models in the real-time control loop of a hip/knee exoskeleton (<i>N</i> = 8) demonstrated estimator performance similar to offline results while augmenting user performance based on those estimated moments (9.5 to 14.6% metabolic cost reductions compared with no exoskeleton). Our framework enables researchers to train real-time deployable deep learning, task-agnostic models with limited or no access to labeled, device-specific data.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"10 108","pages":""},"PeriodicalIF":27.5,"publicationDate":"2025-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145545475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}