
Science Robotics: Latest Publications

Autonomous robotic intraocular surgery for targeted retinal injections
IF 27.5 | Tier 1 (Computer Science) | Q1 ROBOTICS | Pub Date: 2026-01-14 | DOI: 10.1126/scirobotics.adx7359
Gui-Bin Bian, Yawen Deng, Zhen Li, Qiang Ye, Yupeng Zhai, Yong Huang, Yingxiong Xie, Weihong Yu, Zhangwanyu Wei, Zhangguo Yu
Intraocular surgery is challenged by restricted environmental perception and difficulties in instrument depth estimation. The advent of autonomous intraocular surgery represents a milestone in medical technology, given that it can enhance surgical consistency and thereby improve patient safety, shorten surgeon training periods so that more patients can undergo surgery, reduce dependency on human resources, and enable surgeries in remote or extreme environments. In this study, an autonomous robotic system for intraocular surgery (ARISE) was developed, achieving targeted retinal injections throughout the intraocular space. The robotic system achieves intelligent perception and macro/microprecision positioning of the instrument throughout the intraocular space through two key innovations. The first is a multiview spatial fusion that reconciles imaging feature disparities and corrects dynamic spatial misalignments. The second is a criterion-weighted fusion of multisensor data that mitigates inconsistencies in detection range, error magnitude, and sampling frequency. Subretinal and vascular injections were performed on eyeball phantoms, ex vivo porcine eyeballs, and in vivo animal eyeballs. In ex vivo porcine eyeballs, 100% success was achieved for subretinal (n = 20), central retinal vein (CRV) (n = 20), and branch retinal vein (BRV) (n = 20) injections; in in vivo animal eyeballs, 100% success was achieved for subretinal (n = 16), CRV (n = 16), and BRV (n = 16) injections. Compared with manual and teleoperated robotic surgeries, positioning errors were reduced by 79.87% and 54.61%, respectively. These results demonstrate the clinical feasibility of an autonomous intraocular microsurgical robot and its ability to enhance injection precision, safety, and consistency.
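The second innovation, criterion-weighted multisensor fusion, can be pictured as a weighted average whose weights come from per-sensor validity criteria. The sketch below only illustrates that idea and is not the ARISE implementation; the sensor fields, thresholds, and weighting formula are assumptions.

```python
# Illustrative sketch (not the ARISE implementation): fuse instrument-depth
# estimates from several sensors, weighting each reading by simple criteria for
# detection range, error magnitude, and sample freshness (sampling frequency).
from dataclasses import dataclass


@dataclass
class SensorReading:
    depth_mm: float   # estimated instrument depth from this sensor
    range_ok: bool    # True if the target lies inside the sensor's detection range
    error_mm: float   # nominal error magnitude of this sensor
    age_s: float      # time since the sample was taken; differs with sampling frequency


def criterion_weight(r: SensorReading, max_age_s: float = 0.1) -> float:
    """Collapse the three criteria into one non-negative weight."""
    if not r.range_ok or r.age_s > max_age_s:
        return 0.0                              # reject out-of-range or stale samples
    accuracy = 1.0 / max(r.error_mm, 1e-6)      # smaller error -> larger weight
    freshness = 1.0 - r.age_s / max_age_s       # newer sample -> larger weight
    return accuracy * freshness


def fuse_depth(readings: list[SensorReading]) -> float:
    weights = [criterion_weight(r) for r in readings]
    total = sum(weights)
    if total == 0.0:
        raise ValueError("no valid sensor reading available")
    return sum(w * r.depth_mm for w, r in zip(weights, readings)) / total


# Example: a microscope-based estimate fused with a more precise OCT-like probe.
print(fuse_depth([SensorReading(1.42, True, 0.05, 0.02),
                  SensorReading(1.38, True, 0.01, 0.04)]))
```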
Citations: 0
Efficacy and effectiveness of robot-assisted therapy for autism spectrum disorder: From lab to reality
IF 27.5 | Tier 1 (Computer Science) | Q1 ROBOTICS | Pub Date: 2025-12-24 | DOI: 10.1126/scirobotics.adl2266
Daniel David, Paul Baxter, Tony Belpaeme, Erik Billing, Haibin Cai, Hoang-Long Cao, Anamaria Ciocan, Cristina Costescu, Daniel Hernandez Garcia, Pablo Gómez Esteban, James Kennedy, Honghai Liu, Silviu Matu, Alexandre Mazel, Mihaela Selescu, Emmanuel Senft, Serge Thill, Bram Vanderborght, David Vernon, Tom Ziemke
The use of social robots in therapy for children with autism has been explored for more than 20 years, but clinical evidence is still limited. The work presented here provides a systematic approach to evaluating both efficacy and effectiveness, bridging the gap between theory and practice by targeting joint attention, imitation, and turn-taking as core developmental mechanisms that can make a difference in autism interventions. We present two randomized clinical trials with different robot-assisted therapy implementations aimed at young children. The first is an efficacy trial (n = 69; mean age = 4.4 years) showing that 12 biweekly sessions of in-clinic robot-assisted therapy achieve equivalent outcomes to conventional treatment but with a significant increase in the patients’ engagement. The second trial (n = 63; mean age = 5.9 years) evaluates the effectiveness in real-world settings by substituting the clinical setup with a simpler one for use in schools or homes. Over the course of a modest dosage of five sessions, we show equivalent outcomes to standard treatment. Both efficacy and effectiveness trials lend further credibility to the beneficial role that social robots can play in autism therapy while also highlighting the potential advantages of portable and cost-effective setups.
Citations: 0
Reconfigurable aerial robot for subterranean mine inspection
IF 27.5 | Tier 1 (Computer Science) | Q1 ROBOTICS | Pub Date: 2025-12-24 | DOI: 10.1126/scirobotics.aee7991
Amos Matsiko
A reconfigurable aerial robot enables inspection of subterranean environments accessible only through narrow boreholes.
Citations: 0
Robots do not like Asimov’s three laws
IF 27.5 | Tier 1 (Computer Science) | Q1 ROBOTICS | Pub Date: 2025-12-24 | DOI: 10.1126/scirobotics.aee0315
Robin R. Murphy
In The Downloaded, a robot cripples a roboticist for promoting Asimov’s three laws of robotics.
Citations: 0
How reliable is robotic manipulation in the real world?
IF 27.5 | Tier 1 (Computer Science) | Q1 ROBOTICS | Pub Date: 2025-12-17 | DOI: 10.1126/scirobotics.adz6787
Robert D. Howe, Zixi Liu
The reliability of manipulation in unstructured environments is unknown, but 1 in 10,000 dropped items may be acceptable.
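Putting the 1-in-10,000 figure in context (this calculation is not from the article): verifying such a failure rate empirically takes tens of thousands of trials, since with zero observed failures the approximate 95% upper confidence bound on the failure probability is about 3/n (the rule of three).

```python
# Rough illustration (not from the article): how many failure-free trials are
# needed before a 95% upper confidence bound on the drop rate falls below 1e-4.
import math

def upper_bound_zero_failures(n_trials: int, confidence: float = 0.95) -> float:
    """Exact binomial upper bound on p with zero observed failures:
    solve (1 - p)^n = 1 - confidence for p."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n_trials)

target = 1e-4
n = 1
while upper_bound_zero_failures(n) > target:
    n *= 2
print(f"about {n} failure-free trials bound the failure rate below {target}")
print(f"rule-of-three estimate: roughly {math.ceil(3 / target)} trials")
```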
Citations: 0
Soft deployable airless wheel for lunar lava tube intact exploration
IF 27.5 | Tier 1 (Computer Science) | Q1 ROBOTICS | Pub Date: 2025-12-17 | DOI: 10.1126/scirobotics.adx2549
Seong-Bin Lee, Namsuk Cho, Geonho Lee, Seungju Lee, Junseo Kim, Gyujin Shim, Jong Tai Jang, Se Kwon Kim, TaeWon Seo, Chae Kyung Sim, Dae-Young Lee
Lunar pits and lava tubes hold promise for future human habitation, offering natural protection and stable environments. However, exploring these sites entails challenging terrain, including steep slopes along cave funnels and vertical cliffs. Here, we present a soft, deployable airless wheel to address these challenges. By achieving a high deployment ratio, multiple rovers can be stowed efficiently without sacrificing mobility, thereby improving mission reliability and flexibility. The proposed wheel incorporates a reconfigurable reciprocal structure of elastic steel strips arranged in a woven helical pattern, enabling shape transformations while preserving load-bearing capacity. This reciprocal arrangement also allows for safe vertical descents and mitigates damage from accidental falls in caves. By distributing strain throughout the wheel’s body, reliance on delicate mechanical components is minimized—a critical advantage under extreme lunar conditions. The wheel can be stowed at a diameter of 230 millimeters and deployed to 500 millimeters. Experimental results show successful traversal of 200-millimeter obstacles, stable mobility on rocky and lunar soil simulant surfaces, and resilience to drop impacts simulating a 100-meter descent under lunar gravity. These findings underscore the wheel’s suitability for future pit and cave exploration, even in harsh lunar environments.
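Two of the quoted figures can be put in context with simple arithmetic. The sketch below is illustrative only; the lunar surface gravity value (about 1.62 meters per second squared) and the free-fall assumption are mine, not the authors'.

```python
# Back-of-the-envelope context for two figures quoted in the abstract.
# Assumptions (not from the abstract): lunar gravity of ~1.62 m/s^2, pure free
# fall, no rebound or cushioning before impact.
import math

stowed_mm, deployed_mm = 230.0, 500.0
g_moon_m_s2 = 1.62
drop_height_m = 100.0

deployment_ratio = deployed_mm / stowed_mm                   # ~2.2x diameter increase
impact_speed = math.sqrt(2.0 * g_moon_m_s2 * drop_height_m)  # v = sqrt(2 g h) = 18 m/s

print(f"deployment ratio: {deployment_ratio:.2f}x")
print(f"free-fall impact speed after {drop_height_m:.0f} m under lunar gravity: {impact_speed:.1f} m/s")
```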
Citations: 0
Learning robot behavior from human-human interactions
IF 27.5 | Tier 1 (Computer Science) | Q1 ROBOTICS | Pub Date: 2025-12-17 | DOI: 10.1126/scirobotics.aee5779
Melisa Yashinski
A model trained by observing human-human interactions produces more natural robot behavior during human-robot interaction.
Citations: 0
Deep learning–based autonomous retinal vein cannulation in ex vivo porcine eyes
IF 27.5 | Tier 1 (Computer Science) | Q1 ROBOTICS | Pub Date: 2025-12-17 | DOI: 10.1126/scirobotics.adw2969
Peiyao Zhang, Peter Gehlbach, Russell H. Taylor, Iulian Iordachita, Marin Kobilarov
Retinal vein cannulation (RVC) is an emerging method for treating retinal vein occlusion (RVO). The success of this procedure depends on surgeon expertise and, recently, robotic assistance. This paper proposes an autonomous RVC workflow leveraging deep learning and computer vision. Two Steady-Hand Eye Robots (SHERs) controlled a 100-micrometer metal needle and a medical spatula to execute precise tasks. Three convolutional neural networks were trained to predict needle movement direction and identify contact and puncture events. A surgical microscope equipped with an intraoperative optical coherence tomography (iOCT) system captured both the surgical field and cross-sectional images (B-scans). The goal was to enable the robot to autonomously carry out the critical steps of the RVC procedure, especially those that are challenging and require expert knowledge. The less technically demanding tasks were assigned to the user, who also supervised the robot during these steps. Our method was tested on 20 ex vivo porcine eyes, achieving a success rate of 90%. In addition, we simulated eye movements caused by breathing on six other ex vivo porcine eyes. With the eyes moving in a sinusoidal pattern, we achieved a success rate of 83%, demonstrating the robustness and stability of the proposed workflow. Our results demonstrate that the autonomous RVC workflow, incorporating deep learning and robotic assistance, achieves high success rates in both static and dynamic conditions, indicating its potential to enhance the precision and reliability of RVO treatment.
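As a rough illustration of one component described above, a small convolutional network can label an iOCT B-scan patch as no contact, contact, or puncture. The sketch uses PyTorch; the architecture, layer sizes, and 128-by-128 input resolution are assumptions, not the authors' published models.

```python
# Minimal sketch (not the published models): a CNN that labels an iOCT B-scan
# patch as {no contact, vessel contact, puncture}. All sizes are illustrative.
import torch
import torch.nn as nn


class BScanEventNet(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),           # global pooling -> (batch, 64, 1, 1)
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, H, W) grayscale B-scan patch
        return self.classifier(self.features(x).flatten(1))


# Example: classify a batch of four 128x128 patches.
model = BScanEventNet()
logits = model(torch.randn(4, 1, 128, 128))
print(logits.argmax(dim=1))   # predicted event label per patch
```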
Citations: 0
Resilient odometry via hierarchical adaptation
IF 27.5 | Tier 1 (Computer Science) | Q1 ROBOTICS | Pub Date: 2025-12-10 | DOI: 10.1126/scirobotics.adv1818
Shibo Zhao, Sifan Zhou, Yuchen Zhang, Ji Zhang, Chen Wang, Wenshan Wang, Sebastian Scherer
Resilient and robust odometry is crucial for autonomous systems operating in complex and dynamic environments. Existing odometry systems often struggle with severe sensory degradation and extreme conditions such as smoke, sandstorms, snow, or low light, threatening both the safety and functionality of robots. To address these challenges, we present Super Odometry, a sensor fusion framework that dynamically adapts to varying levels of environmental degradation. Super Odometry uses a hierarchical structure to integrate four core modules, ordered from lower- to higher-level adaptability: adaptive feature selection, adaptive state direction selection, adaptive engine selection, and learning-based inertial odometry. The inertial odometry, trained on more than 100 hours of data from heterogeneous robotic platforms, captures comprehensive motion dynamics. Super Odometry elevates the inertial measurement unit to equal importance with camera and light detection and ranging (LiDAR) systems in the sensor fusion framework, providing a reliable fallback when exteroceptive sensors fail. Super Odometry has been validated across 200 kilometers and 800 operational hours on a fleet of aerial, wheeled, and legged robots and under diverse sensor configurations, environmental degradation, and aggressive motion profiles. It marks an important step toward safe and long-term robotic autonomy in all-degraded environments.
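The higher-level adaptive engine selection and the inertial fallback can be pictured as a health-gated switch among odometry sources. The sketch below only illustrates that idea and is not the Super Odometry code; the engine names, health metrics, and threshold are assumptions.

```python
# Illustrative sketch (not Super Odometry itself): each cycle, pick which
# odometry engine to trust from simple health scores, falling back to the
# learned inertial odometry when the exteroceptive engines degrade.
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

Pose = Tuple[float, float, float]   # placeholder pose type (x, y, yaw)


@dataclass
class Engine:
    name: str
    estimate: Callable[[], Pose]   # returns the engine's latest pose estimate
    health: Callable[[], float]    # 0 (failed) to 1 (nominal), e.g. feature count or residuals


def select_pose(engines: Dict[str, Engine], threshold: float = 0.5) -> Pose:
    # Prefer the exteroceptive engines (LiDAR, visual) while they are healthy.
    for name in ("lidar", "visual"):
        eng = engines.get(name)
        if eng is not None and eng.health() > threshold:
            return eng.estimate()
    # Degraded perception (smoke, darkness, ...): fall back to inertial odometry.
    return engines["inertial"].estimate()


engines = {
    "lidar": Engine("lidar", lambda: (1.00, 0.0, 0.0), lambda: 0.2),    # degraded
    "visual": Engine("visual", lambda: (1.10, 0.0, 0.0), lambda: 0.1),  # degraded
    "inertial": Engine("inertial", lambda: (0.95, 0.0, 0.0), lambda: 0.9),
}
print(select_pose(engines))   # inertial estimate wins in this degraded scenario
```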
Citations: 0
Microscopic robots that sense, think, act, and compute
IF 27.5 | Tier 1 (Computer Science) | Q1 ROBOTICS | Pub Date: 2025-12-10 | DOI: 10.1126/scirobotics.adu8009
Maya M. Lassiter, Jungho Lee, Kyle Skelil, Li Xu, Lucas Hanson, William H. Reinhardt, Dennis Sylvester, Mark Yim, David Blaauw, Marc Z. Miskin
Although miniaturization has been a goal in robotics for nearly 40 years, roboticists have struggled to access submillimeter dimensions without making sacrifices to onboard information processing because of the unique physics of the microscale. Consequently, microrobots often lack the key features that distinguish their macroscopic cousins from other machines, namely, on-robot systems for decision-making, sensing, feedback, and programmable computation. Here, we take up the challenge of building a robot comparable in size to a single-celled paramecium that can sense, think, and act using onboard systems for computation, sensing, memory, locomotion, and communication. Built massively in parallel with fully lithographic processing, these microrobots can execute digitally defined algorithms and autonomously change behavior in response to their surroundings. Combined, these results pave the way for general-purpose microrobots that can be programmed many times in a simple setup and can work together to carry out tasks without supervision in uncertain environments.
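As a toy illustration of what running a digitally defined sense-think-act algorithm on board could look like, the sketch below implements a two-state program that switches behavior when a (hypothetical) light reading crosses a threshold; it says nothing about the actual circuitry, sensors, or actuation of the reported microrobots.

```python
# Toy sense-think-act loop (hypothetical; unrelated to the reported CMOS design):
# a two-state program that switches gait when its light reading crosses thresholds.
import random

def read_light_sensor() -> float:
    return random.random()              # stand-in for an on-robot photosensor


def actuate(gait: str) -> None:
    print(f"stepping with gait: {gait}")


state = "EXPLORE"
for _ in range(5):                      # a few control cycles
    light = read_light_sensor()         # sense
    if state == "EXPLORE" and light > 0.7:      # think: threshold rules held in memory
        state = "SEEK_DARK"
    elif state == "SEEK_DARK" and light < 0.3:
        state = "EXPLORE"
    actuate("forward" if state == "EXPLORE" else "turn")   # act
```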
Citations: 0