
Latest publications in Science Robotics

Scalable robot collective resilience by sharing resources
IF 25 | CAS Tier 1 (Computer Science) | Q1 ROBOTICS | Pub Date: 2026-02-11 | DOI: 10.1126/scirobotics.ady6304
Kevin Holdcroft, Anastasia Bolotnikova, Antoni Jubés Monforte, Jamie Paik
No system is immune to failure. The compromise between reducing failures and improving adaptability is a recurring problem in robotics. Modular robots exemplify this tradeoff, because the number of modules dictates both the possible functions and the odds of failure. We reverse this trend, improving reliability with an increased number of modules by exploiting redundant resources and sharing them locally. We present a unified methodology for local resource sharing: local power sharing balances energy distribution, hybrid communication spreads messages, and local sensor fusion propagates full system state estimate information among the robot collective. We present the experimental results of our methodology applied to a modular robot, Mori3. Despite one module being deprived of its own resources in terms of power, sensing, and communication, the robot collective can successfully perform a locomotion mission in a challenging environment, thanks to neighboring modules supporting each other via our proposed resource-sharing methodology.
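The abstract only sketches the mechanism; as a hedged illustration (not the paper's actual algorithm), local power sharing between chained modules can be pictured as a diffusion-style balancing of battery charge, where each pair of neighbors repeatedly transfers a fraction of their charge difference. The transfer rate `ALPHA` and the chain topology are illustrative assumptions:

```python
# Hypothetical sketch: diffusion-style local power balancing among chained
# modules. Not the Mori3 algorithm; ALPHA and the topology are assumptions.

ALPHA = 0.5  # fraction of the pairwise charge difference moved per visit

def share_power(charges, steps=30):
    """Balance battery charge between neighboring modules in a chain."""
    charges = list(charges)
    for _ in range(steps):
        for i in range(len(charges) - 1):
            delta = charges[i] - charges[i + 1]
            transfer = ALPHA * delta / 2  # each visit halves the pair's gap
            charges[i] -= transfer
            charges[i + 1] += transfer
    return charges

# A fully depleted module (0.0) recovers charge from its neighbors,
# while total energy in the collective is conserved.
balanced = share_power([0.9, 0.8, 0.0, 0.7])
```

Under this sketch, the depleted module converges to the collective average without any global coordinator, which is the qualitative behavior the abstract describes.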
Citations: 0
Bioinspired adaptive pupil reflex based on liquid-metal shape-shifters for machine vision
IF 25 | CAS Tier 1 (Computer Science) | Q1 ROBOTICS | Pub Date: 2026-02-11 | DOI: 10.1126/scirobotics.adx0715
Kun Liang, Rui Wang, Gavin Lyda, Anran Zhang, Wanrong Xie, Yihang Wang, Sicheng Xing, Yizhang Wu, Zhibo Zhang, Yihan Liu, Michael D. Dickey, Bowen Zhu, Wubin Bai
Inspired by the evolutionary diversification of biological eyes for environmental adaptation, recently emerged artificial counterparts offer a variety of visual features that can emulate the eyes of humans, insects, fish, eagles, cats, and others. However, grand challenges reside in developing transformational artificial pupils to address drastic environmental change. Here, we propose a bioinspired vision system that integrates a hemispherical imaging array as an artificial retina with liquid-metal shape-shifters as visual neurons and an adaptive artificial pupil to comprehensively simulate visual recognition with closed-loop pupil reflex behavior. The controlled deformation of the liquid metal allows the design of a range of animal pupil shapes, and the rapid switching of short and open circuits simulates biological spike nerve signals. Under strong light, the system adaptively adjusts the pupil deformation of liquid metal to reduce the amount of exposure, which improves the image recognition accuracy of the artificial vision system under high-light conditions and confirms the key characteristics and functions of the artificial vision system, including ultrawide field of view, adaptive adjustment of light, and image recognition functions. The ability to simulate multiple shapes of animal pupils further demonstrates the programmability of the system and highlights its potential for bioinspired robotic systems, advanced machine vision, and autonomous driving.
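The closed-loop pupil reflex described above can be illustrated with a minimal feedback sketch, assuming a proportional controller that shrinks the aperture until sensor exposure (light intensity × pupil area) reaches a target. `TARGET_EXPOSURE`, `GAIN`, and the aperture floor are illustrative assumptions, not values from the paper:

```python
# Hypothetical sketch of an adaptive pupil reflex: under strong light the
# aperture contracts to hold exposure near a target. All constants are
# illustrative assumptions.

TARGET_EXPOSURE = 1.0
GAIN = 0.5

def pupil_reflex(area, light_intensity, steps=50):
    """Iteratively adjust pupil area so exposure -> TARGET_EXPOSURE."""
    for _ in range(steps):
        exposure = light_intensity * area
        area += GAIN * (TARGET_EXPOSURE - exposure) / light_intensity
        area = max(area, 0.05)  # mechanical lower bound on the aperture
    return area

# Under 10x brighter light, the pupil settles at one-tenth the area,
# keeping exposure constant; under baseline light it stays fully open.
bright = pupil_reflex(area=1.0, light_intensity=10.0)
dim = pupil_reflex(area=1.0, light_intensity=1.0)
```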
Citations: 0
Visual-tactile pretraining and online multitask learning for humanlike manipulation dexterity
IF 27.5 | CAS Tier 1 (Computer Science) | Q1 ROBOTICS | Pub Date: 2026-01-28 | DOI: 10.1126/scirobotics.ady2869
Qi Ye, Qingtao Liu, Siyun Wang, Jiaying Chen, Yu Cui, Ke Jin, Huajin Chen, Xuan Cai, Gaofeng Li, Jiming Chen
Achieving humanlike dexterity with anthropomorphic multifingered robotic hands requires precise finger coordination. However, dexterous manipulation remains highly challenging because of high-dimensional action-observation spaces, complex hand-object contact dynamics, and frequent occlusions. To address this, we drew inspiration from the human learning paradigm of observation and practice and propose a two-stage learning framework by learning visual-tactile integration representations via self-supervised learning from human demonstrations. We trained a unified multitask policy through reinforcement learning and online imitation learning. This decoupled learning enabled the robot to acquire generalizable manipulation skills using only monocular images and simple binary tactile signals. With the unified policy, we built a multifingered hand manipulation system that performs multiple complicated tasks with low-cost sensing. It achieved an 85% success rate across five complex tasks and 25 objects and further generalized to three unseen tasks that share similar hand-object coordination patterns with the training tasks.
Citations: 0
Energy efficiency and neural control of continuous versus intermittent swimming in a fishlike robot
IF 27.5 | CAS Tier 1 (Computer Science) | Q1 ROBOTICS | Pub Date: 2026-01-28 | DOI: 10.1126/scirobotics.adw7868
Xiangxiao Liu, François A. Longchamp, Luca Zunino, Louis Gevers, Lisa R. Schneider, Selina I. Bothner, André Guignard, Alessandro Crespi, Guillaume Bellegarda, Alexandre Bernardino, Eva A. Naumann, Auke J. Ijspeert
Many aquatic animals, including larval zebrafish, exhibit intermittent locomotion, moving via discrete swimming bouts followed by passive glides rather than continuous movement. However, fundamental questions remain unresolved: What neural mechanisms drive this behavior, and what functional benefits does this behavior offer? Specifically, is intermittent swimming more energy efficient than continuous swimming, and, if so, by what mechanism? Live-animal experiments pose technical challenges, because observing or manipulating internal physiological states in freely swimming animals is difficult. Hence, we developed ZBot, a bioinspired robot that replicates the morphological features of larval zebrafish. Embedding a network model inspired by neural circuits and kinematic recordings of larval zebrafish, ZBot reproduces diverse swimming gaits of larval zebrafish bout-and-glide locomotion. By testing ZBot swimming in both turbulent and viscous flow regimes, we confirm that viscous flow markedly reduces traveled distance but minimally affects turning angles. We further tested ZBot in these regimes to analyze how key parameters (tail-beating frequency and amplitude) influence velocity and power use. Our results show that intermittent swimming lowers the energetic cost of transport across most achievable velocities in both flow regimes. Although prior work linked this efficiency to fluid dynamics, like reduced glide drag, we identify an extra mechanism: better actuator efficiency. Mechanistically, this benefit arises because intermittent locomotion shifts the robot’s actuators to higher inherent efficiency. This work introduces a fishlike robot capable of biomimetic intermittent swimming—with demonstrated energy advantages at relevant speeds—and provides general insights into the factors shaping locomotor behavior and efficiency in aquatic animals.
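The energetic comparison the abstract makes rests on cost of transport, commonly defined as CoT = P / (m g v). A hedged sketch of that bookkeeping: in bout-and-glide swimming the actuators draw power only during the burst fraction of the cycle, which can lower the time-averaged power even at a lower mean speed. All numbers below are illustrative assumptions, not measurements from the ZBot experiments:

```python
# Hypothetical sketch comparing cost of transport (CoT = P / (m * g * v))
# for continuous versus burst-and-coast swimming. Illustrative numbers only.

G = 9.81  # gravitational acceleration, m/s^2

def cost_of_transport(mean_power_w, mass_kg, mean_velocity_ms):
    """Dimensionless cost of moving unit weight over unit distance."""
    return mean_power_w / (mass_kg * G * mean_velocity_ms)

def burst_coast_cot(burst_power_w, duty_cycle, mass_kg, mean_velocity_ms):
    """Average power over a bout-and-glide cycle: active only during bursts."""
    return cost_of_transport(burst_power_w * duty_cycle, mass_kg, mean_velocity_ms)

# Even with a higher instantaneous burst power and a lower mean speed,
# the 40% duty cycle can yield a lower cost of transport.
continuous = cost_of_transport(mean_power_w=2.0, mass_kg=0.1, mean_velocity_ms=0.20)
intermittent = burst_coast_cot(burst_power_w=3.0, duty_cycle=0.4,
                               mass_kg=0.1, mean_velocity_ms=0.15)
```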
Citations: 0
Within arm’s reach: A path forward for robot dexterity
IF 27.5 | CAS Tier 1 (Computer Science) | Q1 ROBOTICS | Pub Date: 2026-01-28 | DOI: 10.1126/scirobotics.aee5782
Sudharshan Suresh
Visuotactile pretraining with human data leads to robust manipulation policies trained in simulation.
Citations: 0
Is intermittent swimming lazy or clever?
IF 27.5 | CAS Tier 1 (Computer Science) | Q1 ROBOTICS | Pub Date: 2026-01-28 | DOI: 10.1126/scirobotics.aee3862
Daniel B. Quinn
The motor efficiency of a zebrafish-like robot helps to explain the advantages of burst-and-coast swimming.
Citations: 0
A self-guided intubation device
IF 27.5 | CAS Tier 1 (Computer Science) | Q1 ROBOTICS | Pub Date: 2026-01-21 | DOI: 10.1126/scirobotics.aef4218
Amos Matsiko
A soft-robotic, self-guided intubation device provides fast, safe airway access with minimal user training.
Citations: 0
Lightweight haptic ring delivers high force feedback
IF 27.5 | CAS Tier 1 (Computer Science) | Q1 ROBOTICS | Pub Date: 2026-01-21 | DOI: 10.1126/scirobotics.aef4236
Melisa Yashinski
The OriRing achieves a high power-to-weight ratio with origami-inspired joints powered by a soft pneumatic actuator.
Citations: 0
Architectural swarms for responsive façades and creative expression
IF 27.5 | CAS Tier 1 (Computer Science) | Q1 ROBOTICS | Pub Date: 2026-01-21 | DOI: 10.1126/scirobotics.ady7233
Merihan Alhafnawi, Jad Bendarkawi, Yenet Tafesse, Lucia Stein-Montalvo, Azariah Jones, Vicky Chow, Sigrid Adriaenssens, Radhika Nagpal
Living architectures, such as beehives and ant bridges, adapt continuously to their environments through self-organization of swarming agents. In contrast, most human-made architecture remains static, unable to respond to changing climates or occupant needs. Despite advances in biomimicry within architecture, architectural systems still lack the self-organizing dynamics found in natural swarms. In this work, we introduce the concept of architectural swarms: systems that integrate swarm intelligence and robotics into modular architectural façades to enable responsiveness to environmental conditions and human preferences. We present the Swarm Garden, a proof of concept composed of robotic modules called SGbots. Each SGbot features buckling-sheet actuation, sensing, computation, and wireless communication. SGbots can be networked into reconfigurable spatial systems that exhibit collective behavior, forming a testbed for exploring architectural swarm applications. We demonstrate two application case studies. The first explores adaptive shading using self-organization, where SGbots respond to sunlight using a swarm controller based on opinion dynamics. In a 16-SGbot deployment on an office window, the system adapted effectively to sunlight, showing robustness to sensor failures and different climates. Simulations demonstrated scalability and tunability in larger spaces. The second study explores creative expression in interior design, with 36 SGbots responding to human interaction during a public exhibition, including a live dance performance mediated by a wearable device. Results show that the system was engaging and visually compelling, with 96% positive attendee sentiments. The Swarm Garden exemplifies how architectural swarms can transform the built environment, enabling “living-like” architecture for functional and creative applications.
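The "swarm controller based on opinion dynamics" mentioned above can be pictured with a minimal sketch, assuming each façade module nudges its shading "opinion" toward its chain neighbors' average while also pulling toward its own light reading. The weights, chain topology, and update rule are illustrative assumptions, not the Swarm Garden controller:

```python
# Hypothetical sketch of opinion-dynamics shading on a 1D chain of facade
# modules. Weights and topology are illustrative assumptions.

W_NEIGHBOR = 0.3  # coupling toward neighbor consensus
W_SENSOR = 0.4    # pull toward the module's own light reading

def update_opinions(opinions, light, steps=100):
    """Synchronously iterate the coupled update; opinions clamped to [0, 1]."""
    opinions = list(opinions)
    n = len(opinions)
    for _ in range(n and steps):
        nxt = []
        for i in range(n):
            nbrs = [opinions[j] for j in (i - 1, i + 1) if 0 <= j < n]
            mean_nbr = sum(nbrs) / len(nbrs)
            o = opinions[i]
            o += W_NEIGHBOR * (mean_nbr - o) + W_SENSOR * (light[i] - o)
            nxt.append(min(1.0, max(0.0, o)))
        opinions = nxt
    return opinions

# Sunlit modules (light=1) converge toward closed/shaded (opinion near 1),
# blending smoothly across the boundary with the shaded region (light=0).
shade = update_opinions([0.5] * 6, light=[1, 1, 1, 0, 0, 0])
```

The neighbor-coupling term is what makes the response spatially coherent rather than each module flickering on its own sensor noise, which matches the robustness behavior the abstract reports.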
Citations: 0
Learning realistic lip motions for humanoid face robots
IF 27.5 | CAS Tier 1 (Computer Science) | Q1 ROBOTICS | Pub Date: 2026-01-14 | DOI: 10.1126/scirobotics.adx3017
Yuhang Hu, Jiong Lin, Judah Allen Goldfeder, Philippe M. Wyder, Yifeng Cao, Steven Tian, Yunzhe Wang, Jingran Wang, Mengmeng Wang, Jie Zeng, Cameron Mehlman, Yingke Wang, Delin Zeng, Boyuan Chen, Hod Lipson
Lip motion represents outsized importance in human communication, capturing nearly half of our visual attention during conversation. Yet anthropomorphic robots often fail to achieve lip-audio synchronization, resulting in clumsy and lifeless lip behaviors. Two fundamental barriers underlay this challenge. First, robotic lips typically lack the mechanical complexity required to reproduce nuanced human mouth movements; second, existing synchronization methods depend on manually predefined movements and rules, restricting adaptability and realism. Here, we present a humanoid robot face designed to overcome these limitations, featuring soft silicone lips actuated by a 10–degree-of-freedom mechanism. To achieve lip synchronization without predefined movements, we used a self-supervised learning pipeline based on a variational autoencoder (VAE) combined with a facial action transformer, enabling the robot to autonomously infer more realistic lip trajectories directly from speech audio. Our experimental results suggest that this method outperforms simple heuristics like amplitude-based baselines in achieving more visually coherent lip-audio synchronization. Furthermore, the learned synchronization successfully generalizes across multiple linguistic contexts, enabling robot speech articulation in 10 languages unseen during training.
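For context on the "amplitude-based baselines" the learned model is compared against, here is a hedged sketch of that kind of baseline: mouth opening simply tracks a smoothed audio envelope. The smoothing constant and gain are illustrative assumptions, and this is the comparison method, not the paper's VAE-plus-transformer pipeline:

```python
# Hypothetical sketch of an amplitude-based lip-sync baseline: mouth opening
# follows a smoothed envelope of the audio. Constants are assumptions.

SMOOTH = 0.8  # exponential-smoothing factor for the envelope
GAIN = 1.0    # maps envelope magnitude to opening in [0, 1]

def amplitude_baseline(samples):
    """Map raw audio samples to a per-frame mouth-opening trajectory."""
    envelope, opening = 0.0, []
    for s in samples:
        envelope = SMOOTH * envelope + (1 - SMOOTH) * abs(s)
        opening.append(min(1.0, GAIN * envelope))
    return opening

# Loud samples open the mouth; silence lets it decay closed.
traj = amplitude_baseline([0.0, 0.9, 0.9, 0.0, 0.0])
```

A rule like this cannot distinguish phonemes with similar loudness, which is exactly the realism gap the learned lip trajectories are meant to close.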
Citations: 0