Visual-tactile pretraining and online multitask learning for humanlike manipulation dexterity
Science Robotics, Vol. 11, Issue 110 | Pub Date: 2026-01-28 | DOI: 10.1126/scirobotics.ady2869
Qi Ye, Qingtao Liu, Siyun Wang, Jiaying Chen, Yu Cui, Ke Jin, Huajin Chen, Xuan Cai, Gaofeng Li, Jiming Chen
Achieving humanlike dexterity with anthropomorphic multifingered robotic hands requires precise finger coordination. However, dexterous manipulation remains highly challenging because of high-dimensional action-observation spaces, complex hand-object contact dynamics, and frequent occlusions. To address this, we drew inspiration from the human learning paradigm of observation and practice and proposed a two-stage learning framework: visual-tactile integration representations are first learned via self-supervised learning from human demonstrations, and a unified multitask policy is then trained through reinforcement learning and online imitation learning. This decoupled learning enabled the robot to acquire generalizable manipulation skills using only monocular images and simple binary tactile signals. With the unified policy, we built a multifingered hand manipulation system that performs multiple complicated tasks with low-cost sensing. It achieved an 85% success rate across five complex tasks and 25 objects and further generalized to three unseen tasks that share similar hand-object coordination patterns with the training tasks.
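As a rough illustration of what such visual-tactile pretraining can look like, the sketch below aligns a monocular image embedding with a binary fingertip-contact embedding using an InfoNCE-style objective. The network sizes, the five-contact tactile layout, and the contrastive loss are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch (not the paper's code): self-supervised alignment of a
    # monocular image embedding with a binary tactile vector, in the spirit of the
    # visual-tactile pretraining stage described above.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class VisualTactileEncoder(nn.Module):
        def __init__(self, n_contacts=5, dim=128):
            super().__init__()
            # Small CNN for monocular RGB frames (84x84 assumed for this sketch).
            self.visual = nn.Sequential(
                nn.Conv2d(3, 32, 8, stride=4), nn.ReLU(),
                nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
                nn.Flatten(), nn.LazyLinear(dim),
            )
            # MLP for binary contact signals (one bit per fingertip, assumed).
            self.tactile = nn.Sequential(
                nn.Linear(n_contacts, 64), nn.ReLU(), nn.Linear(64, dim),
            )

        def forward(self, image, touch):
            return (F.normalize(self.visual(image), dim=-1),
                    F.normalize(self.tactile(touch), dim=-1))

    def info_nce(z_img, z_touch, temperature=0.1):
        # Image/touch pairs from the same demonstration frame are positives;
        # all other pairs in the batch act as negatives.
        logits = z_img @ z_touch.t() / temperature
        labels = torch.arange(z_img.shape[0])
        return F.cross_entropy(logits, labels)

    enc = VisualTactileEncoder()
    imgs = torch.rand(8, 3, 84, 84)                # demonstration frames
    touch = torch.randint(0, 2, (8, 5)).float()    # binary fingertip contacts
    loss = info_nce(*enc(imgs, touch))
    loss.backward()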
{"title":"Visual-tactile pretraining and online multitask learning for humanlike manipulation dexterity","authors":"Qi Ye, Qingtao Liu, Siyun Wang, Jiaying Chen, Yu Cui, Ke Jin, Huajin Chen, Xuan Cai, Gaofeng Li, Jiming Chen","doi":"10.1126/scirobotics.ady2869","DOIUrl":"10.1126/scirobotics.ady2869","url":null,"abstract":"<div >Achieving humanlike dexterity with anthropomorphic multifingered robotic hands requires precise finger coordination. However, dexterous manipulation remains highly challenging because of high-dimensional action-observation spaces, complex hand-object contact dynamics, and frequent occlusions. To address this, we drew inspiration from the human learning paradigm of observation and practice and propose a two-stage learning framework by learning visual-tactile integration representations via self-supervised learning from human demonstrations. We trained a unified multitask policy through reinforcement learning and online imitation learning. This decoupled learning enabled the robot to acquire generalizable manipulation skills using only monocular images and simple binary tactile signals. With the unified policy, we built a multifingered hand manipulation system that performs multiple complicated tasks with low-cost sensing. It achieved an 85% success rate across five complex tasks and 25 objects and further generalized to three unseen tasks that share similar hand-object coordination patterns with the training tasks.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"11 110","pages":""},"PeriodicalIF":27.5,"publicationDate":"2026-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146069929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Energy efficiency and neural control of continuous versus intermittent swimming in a fishlike robot
Science Robotics, Vol. 11, Issue 110 | Pub Date: 2026-01-28 | DOI: 10.1126/scirobotics.adw7868
Xiangxiao Liu, François A. Longchamp, Luca Zunino, Louis Gevers, Lisa R. Schneider, Selina I. Bothner, André Guignard, Alessandro Crespi, Guillaume Bellegarda, Alexandre Bernardino, Eva A. Naumann, Auke J. Ijspeert
Many aquatic animals, including larval zebrafish, exhibit intermittent locomotion, moving via discrete swimming bouts followed by passive glides rather than continuous movement. However, fundamental questions remain unresolved: What neural mechanisms drive this behavior, and what functional benefits does it offer? Specifically, is intermittent swimming more energy efficient than continuous swimming, and, if so, by what mechanism? Live-animal experiments pose technical challenges, because observing or manipulating internal physiological states in freely swimming animals is difficult. Hence, we developed ZBot, a bioinspired robot that replicates the morphological features of larval zebrafish. Embedding a network model inspired by the neural circuits and kinematic recordings of larval zebrafish, ZBot reproduces the diverse swimming gaits of their bout-and-glide locomotion. By testing ZBot swimming in both turbulent and viscous flow regimes, we confirm that viscous flow markedly reduces traveled distance but minimally affects turning angles. We further tested ZBot in these regimes to analyze how key parameters (tail-beating frequency and amplitude) influence velocity and power use. Our results show that intermittent swimming lowers the energetic cost of transport across most achievable velocities in both flow regimes. Although prior work linked this efficiency to fluid dynamics, such as reduced glide drag, we identify an additional mechanism: improved actuator efficiency. This benefit arises because intermittent locomotion shifts the robot’s actuators into an operating regime of higher inherent efficiency. This work introduces a fishlike robot capable of biomimetic intermittent swimming—with demonstrated energy advantages at relevant speeds—and provides general insights into the factors shaping locomotor behavior and efficiency in aquatic animals.
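For readers unfamiliar with the metric, the comparison rests on the dimensionless cost of transport, COT = P / (m g v). The toy calculation below contrasts continuous swimming with a bout-and-glide strategy; the mass, power draw, duty cycle, and speed are made-up placeholder values, not measurements from ZBot.

    # Back-of-the-envelope illustration (not data from the paper): cost of transport
    # COT = P / (m * g * v) for continuous versus intermittent swimming.
    G = 9.81  # m/s^2

    def cost_of_transport(mean_power_w, mass_kg, speed_m_s):
        return mean_power_w / (mass_kg * G * speed_m_s)

    mass = 0.10      # kg, hypothetical robot mass
    v_target = 0.15  # m/s, hypothetical mean speed

    # Continuous swimming: the actuator runs all the time at partial load.
    cot_continuous = cost_of_transport(0.80, mass, v_target)

    # Intermittent (bout-and-glide): brief high-power bursts, during which the
    # actuator is assumed to sit nearer its efficient operating point, followed
    # by near-zero-power glides.
    duty = 0.4                        # fraction of time actively beating the tail
    burst_power, glide_power = 1.5, 0.05
    mean_power = duty * burst_power + (1 - duty) * glide_power
    cot_intermittent = cost_of_transport(mean_power, mass, v_target)

    print(f"continuous COT   ~ {cot_continuous:.2f}")
    print(f"intermittent COT ~ {cot_intermittent:.2f}")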
{"title":"Energy efficiency and neural control of continuous versus intermittent swimming in a fishlike robot","authors":"Xiangxiao Liu, François A. Longchamp, Luca Zunino, Louis Gevers, Lisa R. Schneider, Selina I. Bothner, André Guignard, Alessandro Crespi, Guillaume Bellegarda, Alexandre Bernardino, Eva A. Naumann, Auke J. Ijspeert","doi":"10.1126/scirobotics.adw7868","DOIUrl":"10.1126/scirobotics.adw7868","url":null,"abstract":"<div >Many aquatic animals, including larval zebrafish, exhibit intermittent locomotion, moving via discrete swimming bouts followed by passive glides rather than continuous movement. However, fundamental questions remain unresolved: What neural mechanisms drive this behavior, and what functional benefits does this behavior offer? Specifically, is intermittent swimming more energy efficient than continuous swimming, and, if so, by what mechanism? Live-animal experiments pose technical challenges, because observing or manipulating internal physiological states in freely swimming animals is difficult. Hence, we developed ZBot, a bioinspired robot that replicates the morphological features of larval zebrafish. Embedding a network model inspired by neural circuits and kinematic recordings of larval zebrafish, ZBot reproduces diverse swimming gaits of larval zebrafish bout-and-glide locomotion. By testing ZBot swimming in both turbulent and viscous flow regimes, we confirm that viscous flow markedly reduces traveled distance but minimally affects turning angles. We further tested ZBot in these regimes to analyze how key parameters (tail-beating frequency and amplitude) influence velocity and power use. Our results show that intermittent swimming lowers the energetic cost of transport across most achievable velocities in both flow regimes. Although prior work linked this efficiency to fluid dynamics, like reduced glide drag, we identify an extra mechanism: better actuator efficiency. Mechanistically, this benefit arises because intermittent locomotion shifts the robot’s actuators to higher inherent efficiency. This work introduces a fishlike robot capable of biomimetic intermittent swimming—with demonstrated energy advantages at relevant speeds—and provides general insights into the factors shaping locomotor behavior and efficiency in aquatic animals.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"11 110","pages":""},"PeriodicalIF":27.5,"publicationDate":"2026-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146069933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Within arm’s reach: A path forward for robot dexterity
Science Robotics, Vol. 11, Issue 110 | Pub Date: 2026-01-28 | DOI: 10.1126/scirobotics.aee5782
Sudharshan Suresh
Visuotactile pretraining with human data leads to robust manipulation policies trained in simulation.
{"title":"Within arm’s reach: A path forward for robot dexterity","authors":"Sudharshan Suresh","doi":"10.1126/scirobotics.aee5782","DOIUrl":"10.1126/scirobotics.aee5782","url":null,"abstract":"<div >Visuotactile pretraining with human data leads to robust manipulation policies trained in simulation.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"11 110","pages":""},"PeriodicalIF":27.5,"publicationDate":"2026-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146069927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Is intermittent swimming lazy or clever?
Science Robotics, Vol. 11, Issue 110 | Pub Date: 2026-01-28 | DOI: 10.1126/scirobotics.aee3862
Daniel B. Quinn
The motor efficiency of a zebrafish-like robot helps to explain the advantages of burst-and-coast swimming.
{"title":"Is intermittent swimming lazy or clever?","authors":"Daniel B. Quinn","doi":"10.1126/scirobotics.aee3862","DOIUrl":"10.1126/scirobotics.aee3862","url":null,"abstract":"<div >The motor efficiency of a zebrafish-like robot helps to explain the advantages of burst-and-coast swimming.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"11 110","pages":""},"PeriodicalIF":27.5,"publicationDate":"2026-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146069928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A self-guided intubation device
Science Robotics, Vol. 11, Issue 110 | Pub Date: 2026-01-21 | DOI: 10.1126/scirobotics.aef4218
Amos Matsiko
A soft robotic, self-guided intubation device is capable of fast and safe airway access with minimal user training.
{"title":"A self-guided intubation device","authors":"Amos Matsiko","doi":"10.1126/scirobotics.aef4218","DOIUrl":"10.1126/scirobotics.aef4218","url":null,"abstract":"<div >A soft robotics, self-guided intubation device is capable of fast and safe airway access with minimal user training.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"11 110","pages":""},"PeriodicalIF":27.5,"publicationDate":"2026-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146020862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lightweight haptic ring delivers high force feedback
Science Robotics, Vol. 11, Issue 110 | Pub Date: 2026-01-21 | DOI: 10.1126/scirobotics.aef4236
Melisa Yashinski
The OriRing achieves a high power-to-weight ratio with origami-inspired joints powered by a soft pneumatic actuator.
{"title":"Lightweight haptic ring delivers high force feedback","authors":"Melisa Yashinski","doi":"10.1126/scirobotics.aef4236","DOIUrl":"10.1126/scirobotics.aef4236","url":null,"abstract":"<div >The OriRing achieves a high power-to-weight ratio with origami-inspired joints powered by a soft pneumatic actuator.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"11 110","pages":""},"PeriodicalIF":27.5,"publicationDate":"2026-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146015370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Architectural swarms for responsive façades and creative expression
Science Robotics, Vol. 11, Issue 110 | Pub Date: 2026-01-21 | DOI: 10.1126/scirobotics.ady7233
Merihan Alhafnawi, Jad Bendarkawi, Yenet Tafesse, Lucia Stein-Montalvo, Azariah Jones, Vicky Chow, Sigrid Adriaenssens, Radhika Nagpal
Living architectures, such as beehives and ant bridges, adapt continuously to their environments through self-organization of swarming agents. In contrast, most human-made architecture remains static, unable to respond to changing climates or occupant needs. Despite advances in biomimicry within architecture, architectural systems still lack the self-organizing dynamics found in natural swarms. In this work, we introduce the concept of architectural swarms: systems that integrate swarm intelligence and robotics into modular architectural façades to enable responsiveness to environmental conditions and human preferences. We present the Swarm Garden, a proof of concept composed of robotic modules called SGbots. Each SGbot features buckling-sheet actuation, sensing, computation, and wireless communication. SGbots can be networked into reconfigurable spatial systems that exhibit collective behavior, forming a testbed for exploring architectural swarm applications. We demonstrate two application case studies. The first explores adaptive shading using self-organization, where SGbots respond to sunlight using a swarm controller based on opinion dynamics. In a 16-SGbot deployment on an office window, the system adapted effectively to sunlight, showing robustness to sensor failures and different climates. Simulations demonstrated scalability and tunability in larger spaces. The second study explores creative expression in interior design, with 36 SGbots responding to human interaction during a public exhibition, including a live dance performance mediated by a wearable device. Results show that the system was engaging and visually compelling, with 96% positive attendee sentiments. The Swarm Garden exemplifies how architectural swarms can transform the built environment, enabling “living-like” architecture for functional and creative applications.
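To make the opinion-dynamics idea concrete, the sketch below shows a generic synchronous update in which each module blends its own state, the mean of its neighbors' states, and its local sunlight reading on a 4-by-4 window grid. The coupling constants, neighborhood graph, and sensor model are assumptions for illustration, not the Swarm Garden controller itself.

    # Illustrative opinion-dynamics update for adaptive shading (hypothetical
    # parameters; not the published SGbot controller).
    import random

    def step(opinions, neighbors, light, alpha=0.5, beta=0.3):
        """One synchronous round: each module mixes its own opinion, its
        neighbors' mean opinion, and its sunlight reading (0 = dark, 1 = bright)."""
        new = []
        for i, x in enumerate(opinions):
            nbr_mean = sum(opinions[j] for j in neighbors[i]) / len(neighbors[i])
            new.append((1 - alpha - beta) * x + alpha * nbr_mean + beta * light[i])
        return new

    def grid_neighbors(rows, cols):
        # 4-neighborhood on a rows x cols grid of modules.
        nbrs = {}
        for r in range(rows):
            for c in range(cols):
                i = r * cols + c
                nbrs[i] = [rr * cols + cc for rr, cc in
                           [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
                           if 0 <= rr < rows and 0 <= cc < cols]
        return nbrs

    neighbors = grid_neighbors(4, 4)
    opinions = [random.random() for _ in range(16)]          # initial shade states
    light = [1.0 if i % 4 < 2 else 0.2 for i in range(16)]   # sun on the left columns
    for _ in range(20):
        opinions = step(opinions, neighbors, light)
    # Modules in sunlit columns settle at higher values, read here as "more shading".
    print([round(x, 2) for x in opinions])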
{"title":"Architectural swarms for responsive façades and creative expression","authors":"Merihan Alhafnawi, Jad Bendarkawi, Yenet Tafesse, Lucia Stein-Montalvo, Azariah Jones, Vicky Chow, Sigrid Adriaenssens, Radhika Nagpal","doi":"10.1126/scirobotics.ady7233","DOIUrl":"10.1126/scirobotics.ady7233","url":null,"abstract":"<div >Living architectures, such as beehives and ant bridges, adapt continuously to their environments through self-organization of swarming agents. In contrast, most human-made architecture remains static, unable to respond to changing climates or occupant needs. Despite advances in biomimicry within architecture, architectural systems still lack the self-organizing dynamics found in natural swarms. In this work, we introduce the concept of architectural swarms: systems that integrate swarm intelligence and robotics into modular architectural façades to enable responsiveness to environmental conditions and human preferences. We present the Swarm Garden, a proof of concept composed of robotic modules called SGbots. Each SGbot features buckling-sheet actuation, sensing, computation, and wireless communication. SGbots can be networked into reconfigurable spatial systems that exhibit collective behavior, forming a testbed for exploring architectural swarm applications. We demonstrate two application case studies. The first explores adaptive shading using self-organization, where SGbots respond to sunlight using a swarm controller based on opinion dynamics. In a 16-SGbot deployment on an office window, the system adapted effectively to sunlight, showing robustness to sensor failures and different climates. Simulations demonstrated scalability and tunability in larger spaces. The second study explores creative expression in interior design, with 36 SGbots responding to human interaction during a public exhibition, including a live dance performance mediated by a wearable device. Results show that the system was engaging and visually compelling, with 96% positive attendee sentiments. The Swarm Garden exemplifies how architectural swarms can transform the built environment, enabling “living-like” architecture for functional and creative applications.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"11 110","pages":""},"PeriodicalIF":27.5,"publicationDate":"2026-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146014884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning realistic lip motions for humanoid face robots
Science Robotics, Vol. 11, Issue 110 | Pub Date: 2026-01-14 | DOI: 10.1126/scirobotics.adx3017
Yuhang Hu, Jiong Lin, Judah Allen Goldfeder, Philippe M. Wyder, Yifeng Cao, Steven Tian, Yunzhe Wang, Jingran Wang, Mengmeng Wang, Jie Zeng, Cameron Mehlman, Yingke Wang, Delin Zeng, Boyuan Chen, Hod Lipson
Lip motion carries outsized importance in human communication, capturing nearly half of our visual attention during conversation. Yet anthropomorphic robots often fail to achieve lip-audio synchronization, resulting in clumsy and lifeless lip behaviors. Two fundamental barriers underlie this challenge. First, robotic lips typically lack the mechanical complexity required to reproduce nuanced human mouth movements; second, existing synchronization methods depend on manually predefined movements and rules, restricting adaptability and realism. Here, we present a humanoid robot face designed to overcome these limitations, featuring soft silicone lips actuated by a 10–degree-of-freedom mechanism. To achieve lip synchronization without predefined movements, we used a self-supervised learning pipeline based on a variational autoencoder (VAE) combined with a facial action transformer, enabling the robot to autonomously infer more realistic lip trajectories directly from speech audio. Our experimental results suggest that this method outperforms simple heuristics, such as amplitude-based baselines, in achieving more visually coherent lip-audio synchronization. Furthermore, the learned synchronization generalizes across multiple linguistic contexts, enabling robot speech articulation in 10 languages unseen during training.
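The latent-motion half of such a pipeline can be sketched as a small VAE over per-frame lip actuator commands, which an audio-conditioned transformer would then drive. The layer sizes, latent dimension, and loss weighting below are assumptions for illustration, not the authors' architecture.

    # Minimal VAE sketch over 10-DOF lip actuator commands (hypothetical sizes;
    # the audio-conditioned facial action transformer is omitted).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LipVAE(nn.Module):
        def __init__(self, dof=10, latent=4):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(dof, 64), nn.ReLU(), nn.Linear(64, 2 * latent))
            self.dec = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, dof))

        def forward(self, x):
            mu, logvar = self.enc(x).chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
            return self.dec(z), mu, logvar

    def vae_loss(x, recon, mu, logvar, beta=1e-3):
        rec = F.mse_loss(recon, x)                                   # reconstruction term
        kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # KL term
        return rec + beta * kld

    model = LipVAE()
    frames = torch.rand(32, 10)          # normalized lip actuator positions per frame
    loss = vae_loss(frames, *model(frames))
    loss.backward()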
{"title":"Learning realistic lip motions for humanoid face robots","authors":"Yuhang Hu, Jiong Lin, Judah Allen Goldfeder, Philippe M. Wyder, Yifeng Cao, Steven Tian, Yunzhe Wang, Jingran Wang, Mengmeng Wang, Jie Zeng, Cameron Mehlman, Yingke Wang, Delin Zeng, Boyuan Chen, Hod Lipson","doi":"10.1126/scirobotics.adx3017","DOIUrl":"10.1126/scirobotics.adx3017","url":null,"abstract":"<div >Lip motion represents outsized importance in human communication, capturing nearly half of our visual attention during conversation. Yet anthropomorphic robots often fail to achieve lip-audio synchronization, resulting in clumsy and lifeless lip behaviors. Two fundamental barriers underlay this challenge. First, robotic lips typically lack the mechanical complexity required to reproduce nuanced human mouth movements; second, existing synchronization methods depend on manually predefined movements and rules, restricting adaptability and realism. Here, we present a humanoid robot face designed to overcome these limitations, featuring soft silicone lips actuated by a 10–degree-of-freedom mechanism. To achieve lip synchronization without predefined movements, we used a self-supervised learning pipeline based on a variational autoencoder (VAE) combined with a facial action transformer, enabling the robot to autonomously infer more realistic lip trajectories directly from speech audio. Our experimental results suggest that this method outperforms simple heuristics like amplitude-based baselines in achieving more visually coherent lip-audio synchronization. Furthermore, the learned synchronization successfully generalizes across multiple linguistic contexts, enabling robot speech articulation in 10 languages unseen during training.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"11 110","pages":""},"PeriodicalIF":27.5,"publicationDate":"2026-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145964503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Autonomous robotic intraocular surgery for targeted retinal injections
Science Robotics, Vol. 11, Issue 110 | Pub Date: 2026-01-14 | DOI: 10.1126/scirobotics.adx7359
Gui-Bin Bian, Yawen Deng, Zhen Li, Qiang Ye, Yupeng Zhai, Yong Huang, Yingxiong Xie, Weihong Yu, Zhangwanyu Wei, Zhangguo Yu
Intraocular surgery is challenged by restricted environmental perception and difficulties in instrument depth estimation. The advent of autonomous intraocular surgery represents a milestone in medical technology, given that it can enhance surgical consistency and thereby patient safety, shorten surgeon training periods so that more patients can undergo surgery, reduce dependency on human resources, and enable surgeries in remote or extreme environments. In this study, an autonomous robotic system for intraocular surgery (ARISE) was developed, achieving targeted retinal injections throughout the intraocular space. The robotic system achieves intelligent perception and macro/microprecision positioning of the instrument throughout the intraocular space through two key innovations. The first is a multiview spatial fusion that reconciles imaging feature disparities and corrects dynamic spatial misalignments. The second is a criterion-weighted fusion of multisensor data that mitigates inconsistencies in detection range, error magnitude, and sampling frequency. Subretinal and vascular injections were performed on eyeball phantoms, ex vivo porcine eyeballs, and in vivo animal eyeballs. In ex vivo porcine eyeballs, 100% success was achieved for subretinal (n = 20), central retinal vein (CRV) (n = 20), and branch retinal vein (BRV) (n = 20) injections; in in vivo animal eyeballs, 100% success was achieved for subretinal (n = 16), CRV (n = 16), and BRV (n = 16) injections. Compared with manual and teleoperated robotic surgeries, positioning errors were reduced by 79.87 and 54.61%, respectively. These results demonstrate the clinical feasibility of an autonomous intraocular microsurgical robot and its ability to enhance injection precision, safety, and consistency.
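One generic way to realize criterion-weighted multisensor fusion of the kind the abstract describes is inverse-variance weighting gated by detection range and discounted by sample age. The sensor list, criteria, and numbers below are hypothetical; the abstract does not specify ARISE's actual weighting scheme.

    # Hypothetical criterion-weighted fusion of depth estimates from several sensors.
    import numpy as np

    def fuse(readings):
        """readings: list of dicts with keys value (mm), sigma (mm, error std),
        in_range (bool), age (s since last sample)."""
        weights, values = [], []
        for r in readings:
            if not r["in_range"]:
                continue                      # detection-range criterion
            w = 1.0 / r["sigma"] ** 2         # error-magnitude criterion (inverse variance)
            w *= 1.0 / (1.0 + r["age"])       # sampling-frequency criterion (stale = lighter)
            weights.append(w)
            values.append(r["value"])
        weights = np.asarray(weights)
        return float(np.dot(weights / weights.sum(), values))

    # Example with three illustrative sources: a wide-view camera estimate, a
    # high-precision depth probe, and an out-of-range kinematic guess.
    print(fuse([
        {"value": 1.92, "sigma": 0.30, "in_range": True,  "age": 0.03},
        {"value": 1.85, "sigma": 0.05, "in_range": True,  "age": 0.00},
        {"value": 2.40, "sigma": 0.50, "in_range": False, "age": 0.00},
    ]))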
{"title":"Autonomous robotic intraocular surgery for targeted retinal injections","authors":"Gui-Bin Bian, Yawen Deng, Zhen Li, Qiang Ye, Yupeng Zhai, Yong Huang, Yingxiong Xie, Weihong Yu, Zhangwanyu Wei, Zhangguo Yu","doi":"10.1126/scirobotics.adx7359","DOIUrl":"10.1126/scirobotics.adx7359","url":null,"abstract":"<div >Intraocular surgery is challenged by restricted environmental perception and difficulties in instrument depth estimation. The advent of autonomous intraocular surgery represents a milestone in medical technology, given that it can enhance surgical consistency that improves patient safety, shorten surgeon training periods so that more patients can undergo surgery, reduce dependency on human resources, and enable surgeries in remote or extreme environments. In this study, an autonomous robotic system for intraocular surgery (ARISE) was developed, achieving targeted retinal injections throughout the intraocular space. The robotic system achieves intelligent perception and macro/microprecision positioning of the instrument throughout the intraocular space through two key innovations. The first is a multiview spatial fusion that reconciles imaging feature disparities and corrects dynamic spatial misalignments. The second is a criterion-weighted fusion of multisensor data that mitigates inconsistencies in detection range, error magnitude, and sampling frequency. Subretinal and vascular injections were performed on eyeball phantoms, ex vivo porcine eyeballs, and in vivo animal eyeballs. In ex vivo porcine eyeballs, 100% success was achieved for subretinal (<i>n</i> = 20), central retinal vein (CRV) (<i>n</i> = 20), and branch retinal vein (BRV) (<i>n</i> = 20) injections; in in vivo animal eyeballs, 100% success was achieved for subretinal (<i>n</i> = 16), CRV (<i>n</i> = 16), and BRV (<i>n</i> = 16) injections. Compared with manual and teleoperated robotic surgeries, positioning errors were reduced by 79.87 and 54.61%, respectively. These results demonstrate the clinical feasibility of an autonomous intraocular microsurgical robot and its ability to enhance injection precision, safety, and consistency.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"11 110","pages":""},"PeriodicalIF":27.5,"publicationDate":"2026-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145964504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficacy and effectiveness of robot-assisted therapy for autism spectrum disorder: From lab to reality
Science Robotics, Vol. 10, Issue 109 | Pub Date: 2025-12-24 | DOI: 10.1126/scirobotics.adl2266
Daniel David, Paul Baxter, Tony Belpaeme, Erik Billing, Haibin Cai, Hoang-Long Cao, Anamaria Ciocan, Cristina Costescu, Daniel Hernandez Garcia, Pablo Gómez Esteban, James Kennedy, Honghai Liu, Silviu Matu, Alexandre Mazel, Mihaela Selescu, Emmanuel Senft, Serge Thill, Bram Vanderborght, David Vernon, Tom Ziemke
The use of social robots in therapy for children with autism has been explored for more than 20 years, but clinical evidence remains limited. The work presented here provides a systematic approach to evaluating both efficacy and effectiveness, bridging the gap between theory and practice by targeting joint attention, imitation, and turn-taking as core developmental mechanisms that can make a difference in autism interventions. We present two randomized clinical trials with different robot-assisted therapy implementations aimed at young children. The first is an efficacy trial (n = 69; mean age = 4.4 years) showing that 12 biweekly sessions of in-clinic robot-assisted therapy achieve outcomes equivalent to conventional treatment but with a significant increase in patient engagement. The second trial (n = 63; mean age = 5.9 years) evaluates effectiveness in real-world settings by substituting the clinical setup with a simpler one for use in schools or homes. Over a modest dosage of five sessions, we show outcomes equivalent to standard treatment. Both the efficacy and the effectiveness trial lend further credibility to the beneficial role that social robots can play in autism therapy while also highlighting the potential advantages of portable and cost-effective setups.
{"title":"Efficacy and effectiveness of robot-assisted therapy for autism spectrum disorder: From lab to reality","authors":"Daniel David, Paul Baxter, Tony Belpaeme, Erik Billing, Haibin Cai, Hoang-Long Cao, Anamaria Ciocan, Cristina Costescu, Daniel Hernandez Garcia, Pablo Gómez Esteban, James Kennedy, Honghai Liu, Silviu Matu, Alexandre Mazel, Mihaela Selescu, Emmanuel Senft, Serge Thill, Bram Vanderborght, David Vernon, Tom Ziemke","doi":"10.1126/scirobotics.adl2266","DOIUrl":"10.1126/scirobotics.adl2266","url":null,"abstract":"<div >The use of social robots in therapy for children with autism has been explored for more than 20 years, but there still is limited clinical evidence. The work presented here provides a systematic approach to evaluating both efficacy and effectiveness, bridging the gap between theory and practice by targeting joint attention, imitation, and turn-taking as core developmental mechanisms that can make a difference in autism interventions. We present two randomized clinical trials with different robot-assisted therapy implementations aimed at young children. The first is an efficacy trial (<i>n</i> = 69; mean age = 4.4 years) showing that 12 biweekly sessions of in-clinic robot-assisted therapy achieve equivalent outcomes to conventional treatment but with a significant increase in the patients’ engagement. The second trial (<i>n</i> = 63; mean age = 5.9 years) evaluates the effectiveness in real-world settings by substituting the clinical setup with a simpler one for use in schools or homes. Over the course of a modest dosage of five sessions, we show equivalent outcomes to standard treatment. Both efficacy and effectiveness trials lend further credibility to the beneficial role that social robots can play in autism therapy while also highlighting the potential advantages of portable and cost-effective setups.</div>","PeriodicalId":56029,"journal":{"name":"Science Robotics","volume":"10 109","pages":""},"PeriodicalIF":27.5,"publicationDate":"2025-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145813724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}