Autonomous robotic intraocular surgery for targeted retinal injections
Pub Date: 2026-01-14 | DOI: 10.1126/scirobotics.adx7359
Gui-Bin Bian, Yawen Deng, Zhen Li, Qiang Ye, Yupeng Zhai, Yong Huang, Yingxiong Xie, Weihong Yu, Zhangwanyu Wei, Zhangguo Yu
Intraocular surgery is challenged by restricted environmental perception and difficulties in instrument depth estimation. The advent of autonomous intraocular surgery represents a milestone in medical technology, given that it can enhance surgical consistency, thereby improving patient safety; shorten surgeon training periods so that more patients can undergo surgery; reduce dependency on human resources; and enable surgeries in remote or extreme environments. In this study, an autonomous robotic system for intraocular surgery (ARISE) was developed, achieving targeted retinal injections throughout the intraocular space. The system achieves intelligent perception and macro/microprecision positioning of the instrument through two key innovations. The first is a multiview spatial fusion that reconciles imaging feature disparities and corrects dynamic spatial misalignments. The second is a criterion-weighted fusion of multisensor data that mitigates inconsistencies in detection range, error magnitude, and sampling frequency. Subretinal and vascular injections were performed on eyeball phantoms, ex vivo porcine eyeballs, and in vivo animal eyeballs. In ex vivo porcine eyeballs, 100% success was achieved for subretinal (n = 20), central retinal vein (CRV) (n = 20), and branch retinal vein (BRV) (n = 20) injections; in in vivo animal eyeballs, 100% success was achieved for subretinal (n = 16), CRV (n = 16), and BRV (n = 16) injections. Compared with manual and teleoperated robotic surgeries, positioning errors were reduced by 79.87% and 54.61%, respectively. These results demonstrate the clinical feasibility of an autonomous intraocular microsurgical robot and its ability to enhance injection precision, safety, and consistency.
Efficacy and effectiveness of robot-assisted therapy for autism spectrum disorder: From lab to reality
Pub Date: 2025-12-24 | DOI: 10.1126/scirobotics.adl2266
Daniel David, Paul Baxter, Tony Belpaeme, Erik Billing, Haibin Cai, Hoang-Long Cao, Anamaria Ciocan, Cristina Costescu, Daniel Hernandez Garcia, Pablo Gómez Esteban, James Kennedy, Honghai Liu, Silviu Matu, Alexandre Mazel, Mihaela Selescu, Emmanuel Senft, Serge Thill, Bram Vanderborght, David Vernon, Tom Ziemke
The use of social robots in therapy for children with autism has been explored for more than 20 years, but clinical evidence remains limited. The work presented here provides a systematic approach to evaluating both efficacy and effectiveness, bridging the gap between theory and practice by targeting joint attention, imitation, and turn-taking as core developmental mechanisms that can make a difference in autism interventions. We present two randomized clinical trials with different robot-assisted therapy implementations aimed at young children. The first is an efficacy trial (n = 69; mean age = 4.4 years) showing that 12 biweekly sessions of in-clinic robot-assisted therapy achieve equivalent outcomes to conventional treatment but with a significant increase in the patients’ engagement. The second trial (n = 63; mean age = 5.9 years) evaluates the effectiveness in real-world settings by substituting the clinical setup with a simpler one for use in schools or homes. Over the course of a modest dosage of five sessions, we show equivalent outcomes to standard treatment. Both efficacy and effectiveness trials lend further credibility to the beneficial role that social robots can play in autism therapy while also highlighting the potential advantages of portable and cost-effective setups.
{"title":"Efficacy and effectiveness of robot-assisted therapy for autism spectrum disorder: From lab to reality","authors":"Daniel David, Paul Baxter, Tony Belpaeme, Erik Billing, Haibin Cai, Hoang-Long Cao, Anamaria Ciocan, Cristina Costescu, Daniel Hernandez Garcia, Pablo Gómez Esteban, James Kennedy, Honghai Liu, Silviu Matu, Alexandre Mazel, Mihaela Selescu, Emmanuel Senft, Serge Thill, Bram Vanderborght, David Vernon, Tom Ziemke","doi":"10.1126/scirobotics.adl2266","DOIUrl":"10.1126/scirobotics.adl2266","url":null,"abstract":"<div >The use of social robots in therapy for children with autism has been explored for more than 20 years, but there still is limited clinical evidence. The work presented here provides a systematic approach to evaluating both efficacy and effectiveness, bridging the gap between theory and practice by targeting joint attention, imitation, and turn-taking as core developmental mechanisms that can make a difference in autism interventions. We present two randomized clinical trials with different robot-assisted therapy implementations aimed at young children. The first is an efficacy trial (<i>n</i> = 69; mean age = 4.4 years) showing that 12 biweekly sessions of in-clinic robot-assisted therapy achieve equivalent outcomes to conventional treatment but with a significant increase in the patients’ engagement. The second trial (<i>n</i> = 63; mean age = 5.9 years) evaluates the effectiveness in real-world settings by substituting the clinical setup with a simpler one for use in schools or homes. Over the course of a modest dosage of five sessions, we show equivalent outcomes to standard treatment. Both efficacy and effectiveness trials lend further credibility to the beneficial role that social robots can play in autism therapy while also highlighting the potential advantages of portable and cost-effective setups.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"10 109","pages":""},"PeriodicalIF":27.5,"publicationDate":"2025-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145813724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robots do not like Asimov’s three laws
Pub Date: 2025-12-24 | DOI: 10.1126/scirobotics.aee0315
Robin R. Murphy
In The Downloaded, a robot cripples a roboticist for promoting Asimov’s three laws of robotics.
{"title":"Robots do not like Asimov’s three laws","authors":"Robin R. Murphy","doi":"10.1126/scirobotics.aee0315","DOIUrl":"10.1126/scirobotics.aee0315","url":null,"abstract":"<div >In <i>The Downloaded</i>, a robot cripples a roboticist for promoting Asimov’s three laws of robotics.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"10 109","pages":""},"PeriodicalIF":27.5,"publicationDate":"2025-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145823993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How reliable is robotic manipulation in the real world?
Pub Date: 2025-12-17 | DOI: 10.1126/scirobotics.adz6787
Robert D. Howe, Zixi Liu
The reliability of manipulation in unstructured environments is unknown, but 1 in 10,000 dropped items may be acceptable.
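To put that threshold in concrete terms (the workload figure below is an assumption, not from the article):

```python
# Back-of-envelope check: at a drop rate of 1 in 10,000, a robot doing an
# assumed 1,000 picks per day drops an item roughly once every 10 days.
drop_rate = 1 / 10_000
picks_per_day = 1_000
drops_per_day = drop_rate * picks_per_day   # 0.1
print(f"~1 drop every {1 / drops_per_day:.0f} days")
```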
{"title":"How reliable is robotic manipulation in the real world?","authors":"Robert D. Howe, Zixi Liu","doi":"10.1126/scirobotics.adz6787","DOIUrl":"10.1126/scirobotics.adz6787","url":null,"abstract":"<div >The reliability of manipulation in unstructured environments is unknown, but 1 in 10,000 dropped items may be acceptable.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"10 109","pages":""},"PeriodicalIF":27.5,"publicationDate":"2025-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145771574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Soft deployable airless wheel for lunar lava tube intact exploration
Pub Date: 2025-12-17 | DOI: 10.1126/scirobotics.adx2549
Seong-Bin Lee, Namsuk Cho, Geonho Lee, Seungju Lee, Junseo Kim, Gyujin Shim, Jong Tai Jang, Se Kwon Kim, TaeWon Seo, Chae Kyung Sim, Dae-Young Lee
Lunar pits and lava tubes hold promise for future human habitation, offering natural protection and stable environments. However, exploring these sites entails challenging terrain, including steep slopes along cave funnels and vertical cliffs. Here, we present a soft, deployable airless wheel to address these challenges. By achieving a high deployment ratio, multiple rovers can be stowed efficiently without sacrificing mobility, thereby improving mission reliability and flexibility. The proposed wheel incorporates a reconfigurable reciprocal structure of elastic steel strips arranged in a woven helical pattern, enabling shape transformations while preserving load-bearing capacity. This reciprocal arrangement also allows for safe vertical descents and mitigates damage from accidental falls in caves. By distributing strain throughout the wheel’s body, reliance on delicate mechanical components is minimized—a critical advantage under extreme lunar conditions. The wheel can be stowed at a diameter of 230 millimeters and deployed to 500 millimeters. Experimental results show successful traversal of 200-millimeter obstacles, stable mobility on rocky and lunar soil simulant surfaces, and resilience to drop impacts simulating a 100-meter descent under lunar gravity. These findings underscore the wheel’s suitability for future pit and cave exploration, even in harsh lunar environments.
{"title":"Soft deployable airless wheel for lunar lava tube intact exploration","authors":"Seong-Bin Lee, Namsuk Cho, Geonho Lee, Seungju Lee, Junseo Kim, Gyujin Shim, Jong Tai Jang, Se Kwon Kim, TaeWon Seo, Chae Kyung Sim, Dae-Young Lee","doi":"10.1126/scirobotics.adx2549","DOIUrl":"10.1126/scirobotics.adx2549","url":null,"abstract":"<div >Lunar pits and lava tubes hold promise for future human habitation, offering natural protection and stable environments. However, exploring these sites entails challenging terrain, including steep slopes along cave funnels and vertical cliffs. Here, we present a soft, deployable airless wheel to address these challenges. By achieving a high deployment ratio, multiple rovers can be stowed efficiently without sacrificing mobility, thereby improving mission reliability and flexibility. The proposed wheel incorporates a reconfigurable reciprocal structure of elastic steel strips arranged in a woven helical pattern, enabling shape transformations while preserving load-bearing capacity. This reciprocal arrangement also allows for safe vertical descents and mitigates damage from accidental falls in caves. By distributing strain throughout the wheel’s body, reliance on delicate mechanical components is minimized—a critical advantage under extreme lunar conditions. The wheel can be stowed at a diameter of 230 millimeters and deployed to 500 millimeters. Experimental results show successful traversal of 200-millimeter obstacles, stable mobility on rocky and lunar soil simulant surfaces, and resilience to drop impacts simulating a 100-meter descent under lunar gravity. These findings underscore the wheel’s suitability for future pit and cave exploration, even in harsh lunar environments.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"10 109","pages":""},"PeriodicalIF":27.5,"publicationDate":"2025-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145765537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning robot behavior from human-human interactions
Pub Date: 2025-12-17 | DOI: 10.1126/scirobotics.aee5779
Melisa Yashinski
A model trained by observing human-human interactions produces more natural robot behavior during human-robot interaction.
{"title":"Learning robot behavior from human-human interactions","authors":"Melisa Yashinski","doi":"10.1126/scirobotics.aee5779","DOIUrl":"10.1126/scirobotics.aee5779","url":null,"abstract":"<div >A model trained by observing human-human interactions produces more natural robot behavior during human-robot interaction.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"10 109","pages":""},"PeriodicalIF":27.5,"publicationDate":"2025-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145771097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep learning–based autonomous retinal vein cannulation in ex vivo porcine eyes
Pub Date: 2025-12-17 | DOI: 10.1126/scirobotics.adw2969
Peiyao Zhang, Peter Gehlbach, Russell H. Taylor, Iulian Iordachita, Marin Kobilarov
Retinal vein cannulation (RVC) is an emerging method for treating retinal vein occlusion (RVO). The success of this procedure depends on surgeon expertise and, recently, robotic assistance. This paper proposes an autonomous RVC workflow leveraging deep learning and computer vision. Two Steady-Hand Eye Robots (SHERs) controlled a 100-micrometer metal needle and a medical spatula to execute precise tasks. Three convolutional neural networks were trained to predict needle movement direction and identify contact and puncture events. A surgical microscope equipped with an intraoperative optical coherence tomography (iOCT) system captured both the surgical field and cross-sectional images (B-scans). The goal was to enable the robot to autonomously carry out the critical steps of the RVC procedure, especially those that are challenging and require expert knowledge. The less technically demanding tasks were assigned to the user, who also supervised the robot during these steps. Our method was tested on 20 ex vivo porcine eyes, achieving a success rate of 90%. In addition, we simulated eye movements caused by breathing on six other ex vivo porcine eyes. With the eyes moving in a sinusoidal pattern, we achieved a success rate of 83%, demonstrating the robustness and stability of the proposed workflow. Our results demonstrate that the autonomous RVC workflow, incorporating deep learning and robotic assistance, achieves high success rates in both static and dynamic conditions, indicating its potential to enhance the precision and reliability of RVO treatment.
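The abstract names three convolutional networks for needle-direction prediction and contact/puncture detection but gives no architectural detail. A minimal sketch of one such classifier over iOCT B-scan crops might look like the following PyTorch module; the architecture, input size, and class labels are assumptions for illustration, not the paper's networks.

```python
import torch
import torch.nn as nn

class EventNet(nn.Module):
    """Small CNN that labels an iOCT B-scan crop as no-contact / contact /
    puncture. Layer sizes are illustrative assumptions."""
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                   # global average pool
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, H, W) grayscale B-scan crop -> class logits
        return self.head(self.features(x).flatten(1))

# Usage on a dummy 128x128 crop
logits = EventNet()(torch.randn(1, 1, 128, 128))
event = ["no_contact", "contact", "puncture"][int(logits.argmax(dim=1))]
print(event)
```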
{"title":"Deep learning–based autonomous retinal vein cannulation in ex vivo porcine eyes","authors":"Peiyao Zhang, Peter Gehlbach, Russell H. Taylor, Iulian Iordachita, Marin Kobilarov","doi":"10.1126/scirobotics.adw2969","DOIUrl":"10.1126/scirobotics.adw2969","url":null,"abstract":"<div >Retinal vein cannulation (RVC) is an emerging method for treating retinal vein occlusion (RVO). The success of this procedure depends on surgeon expertise and, recently, robotic assistance. This paper proposes an autonomous RVC workflow leveraging deep learning and computer vision. Two Steady-Hand Eye Robots (SHERs) controlled a 100-micrometer metal needle and a medical spatula to execute precise tasks. Three convolutional neural networks were trained to predict needle movement direction and identify contact and puncture events. A surgical microscope with an intraoperative optical coherence tomography (iOCT) system captured the surgical field through a microscope and cross-sectional images (B-scans). The goal was to enable the robot to autonomously carry out the critical steps of the RVC procedure, especially those that are challenging and require expert knowledge. The less technically demanding tasks were assigned to the user, who also supervised the robot during these steps. Our method was tested on 20 ex vivo porcine eyes, achieving a success rate of 90%. In addition, we simulated eye movements caused by breathing on six other ex vivo porcine eyes. With the eyes moving in a sinusoidal pattern, we achieved a success rate of 83%, demonstrating the robustness and stability of the proposed workflow. Our results demonstrate that the autonomous RVC workflow, incorporating deep learning and robotic assistance, achieves high success rates in both static and dynamic conditions, indicating its potential to enhance the precision and reliability of RVO treatment.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"10 109","pages":""},"PeriodicalIF":27.5,"publicationDate":"2025-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145765566","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Resilient odometry via hierarchical adaptation
Pub Date: 2025-12-10 | DOI: 10.1126/scirobotics.adv1818
Shibo Zhao, Sifan Zhou, Yuchen Zhang, Ji Zhang, Chen Wang, Wenshan Wang, Sebastian Scherer
Resilient and robust odometry is crucial for autonomous systems operating in complex and dynamic environments. Existing odometry systems often struggle with severe sensory degradations and extreme conditions such as smoke, sandstorms, snow, or low-light conditions, threatening both the safety and functionality of robots. To address these challenges, we present Super Odometry, a sensor fusion framework that dynamically adapts to varying levels of environmental degradation. Super Odometry uses a hierarchical structure to integrate four core modules from lower-level to higher-level adaptability, including adaptive feature selection, adaptive state direction selection, adaptive engine selection, and a learning-based inertial odometry. The inertial odometry, trained on more than 100 hours of heterogeneous robotic platforms, captures comprehensive motion dynamics. Super Odometry elevates the inertial measurement unit to equal importance with camera and light detection and ranging (LiDAR) systems in the sensor fusion framework, providing a reliable fallback when exteroceptive sensors fail. Super Odometry has been validated across 200 kilometers and 800 operational hours on a fleet of aerial, wheeled, and legged robots and under diverse sensor configurations, environmental degradation, and aggressive motion profiles. It marks an important step toward safe and long-term robotic autonomy in all-degraded environments.
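As a rough illustration of the adaptive engine selection layer (a sketch under assumed health metrics and thresholds, not the paper's implementation), the routing logic can be thought of as a health-gated fallback from LiDAR-inertial to visual-inertial to learned inertial-only odometry:

```python
from enum import Enum, auto

class Engine(Enum):
    LIDAR_INERTIAL = auto()
    VISUAL_INERTIAL = auto()
    INERTIAL_ONLY = auto()      # learned IMU odometry as the fallback

def select_engine(lidar_feature_count: int, image_brightness: float,
                  min_features: int = 150, min_brightness: float = 0.15) -> Engine:
    """Route state estimation to whichever engine's sensing is healthy,
    falling back to inertial-only odometry when all exteroceptive channels
    degrade. Health metrics and thresholds here are illustrative."""
    if lidar_feature_count >= min_features:     # geometry-rich scene
        return Engine.LIDAR_INERTIAL
    if image_brightness >= min_brightness:      # enough light for vision
        return Engine.VISUAL_INERTIAL
    return Engine.INERTIAL_ONLY                 # smoke / darkness: IMU fallback

# Example: dense smoke starves the LiDAR of features and darkens the camera
print(select_engine(lidar_feature_count=40, image_brightness=0.05))
```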
{"title":"Resilient odometry via hierarchical adaptation","authors":"Shibo Zhao, Sifan Zhou, Yuchen Zhang, Ji Zhang, Chen Wang, Wenshan Wang, Sebastian Scherer","doi":"10.1126/scirobotics.adv1818","DOIUrl":"10.1126/scirobotics.adv1818","url":null,"abstract":"<div >Resilient and robust odometry is crucial for autonomous systems operating in complex and dynamic environments. Existing odometry systems often struggle with severe sensory degradations and extreme conditions such as smoke, sandstorms, snow, or low-light conditions, threatening both the safety and functionality of robots. To address these challenges, we present Super Odometry, a sensor fusion framework that dynamically adapts to varying levels of environmental degradation. Super Odometry uses a hierarchical structure to integrate four core modules from lower-level to higher-level adaptability, including adaptive feature selection, adaptive state direction selection, adaptive engine selection, and a learning-based inertial odometry. The inertial odometry, trained on more than 100 hours of heterogeneous robotic platforms, captures comprehensive motion dynamics. Super Odometry elevates the inertial measurement unit to equal importance with camera and light detection and ranging (LiDAR) systems in the sensor fusion framework, providing a reliable fallback when exteroceptive sensors fail. Super Odometry has been validated across 200 kilometers and 800 operational hours on a fleet of aerial, wheeled, and legged robots and under diverse sensor configurations, environmental degradation, and aggressive motion profiles. It marks an important step toward safe and long-term robotic autonomy in all-degraded environments.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"10 109","pages":""},"PeriodicalIF":27.5,"publicationDate":"2025-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145711160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Microscopic robots that sense, think, act, and compute
Pub Date: 2025-12-10 | DOI: 10.1126/scirobotics.adu8009
Maya M. Lassiter, Jungho Lee, Kyle Skelil, Li Xu, Lucas Hanson, William H. Reinhardt, Dennis Sylvester, Mark Yim, David Blaauw, Marc Z. Miskin
Although miniaturization has been a goal in robotics for nearly 40 years, roboticists have struggled to access submillimeter dimensions without making sacrifices to onboard information processing because of the unique physics of the microscale. Consequently, microrobots often lack the key features that distinguish their macroscopic cousins from other machines, namely, on-robot systems for decision-making, sensing, feedback, and programmable computation. Here, we take up the challenge of building a robot comparable in size to a single-celled paramecium that can sense, think, and act using onboard systems for computation, sensing, memory, locomotion, and communication. Built massively in parallel with fully lithographic processing, these microrobots can execute digitally defined algorithms and autonomously change behavior in response to their surroundings. Combined, these results pave the way for general-purpose microrobots that can be programmed many times in a simple setup and can work together to carry out tasks without supervision in uncertain environments.
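To make the sense-think-act claim concrete, here is a toy control loop of the kind such a programmable microrobot could execute; the state names, sensor, and actuator interface are hypothetical stand-ins, not the authors' design.

```python
import random

def run(steps, read_light, set_legs):
    """Sense-think-act loop with a digitally defined behavior state; the
    sensor, actuator, and threshold are hypothetical stand-ins."""
    state = "SEEK"
    for _ in range(steps):
        light = read_light()                 # sense: on-robot photosensor
        if state == "SEEK" and light > 0.8:
            state = "STOP"                   # think: switch behavior near target
        set_legs(forward=(state == "SEEK"))  # act: drive or halt the legs

# Stub hardware: the robot walks until a bright reading flips it to STOP
run(100, read_light=random.random, set_legs=lambda forward: None)
```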
{"title":"Microscopic robots that sense, think, act, and compute","authors":"Maya M. Lassiter, Jungho Lee, Kyle Skelil, Li Xu, Lucas Hanson, William H. Reinhardt, Dennis Sylvester, Mark Yim, David Blaauw, Marc Z. Miskin","doi":"10.1126/scirobotics.adu8009","DOIUrl":"10.1126/scirobotics.adu8009","url":null,"abstract":"<div >Although miniaturization has been a goal in robotics for nearly 40 years, roboticists have struggled to access submillimeter dimensions without making sacrifices to onboard information processing because of the unique physics of the microscale. Consequently, microrobots often lack the key features that distinguish their macroscopic cousins from other machines, namely, on-robot systems for decision-making, sensing, feedback, and programmable computation. Here, we take up the challenge of building a robot comparable in size to a single-celled paramecium that can sense, think, and act using onboard systems for computation, sensing, memory, locomotion, and communication. Built massively in parallel with fully lithographic processing, these microrobots can execute digitally defined algorithms and autonomously change behavior in response to their surroundings. Combined, these results pave the way for general-purpose microrobots that can be programmed many times in a simple setup and can work together to carry out tasks without supervision in uncertain environments.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"10 109","pages":""},"PeriodicalIF":27.5,"publicationDate":"2025-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.science.org/doi/reader/10.1126/scirobotics.adu8009","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145711159","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}