
Latest Publications in Frontiers in Robotics and AI

Comparative analysis of creative problem solving tasks across age groups using modular cube robotics.
IF 2.9 Q2 ROBOTICS Pub Date : 2024-12-13 eCollection Date: 2024-01-01 DOI: 10.3389/frobt.2024.1497511
Mehedi Hasan Anik, Margarida Romero

Creative Problem Solving (CPS) is an important competency when using digital artifacts for educational purposes. Using a dual-process approach, this study examines the divergent thinking scores (fluidity, flexibility, and originality) and problem-solving speed in CPS of different age groups. Participants engaged in CreaCube CPS tasks with educational robotics for two consecutive instances, with performance analyzed to explore the influence of prior experience and creative intentions. In the first instance, infants and children demonstrated greater originality than seniors, who solved problems quickly but with less originality. In the second instance, teens, young adults, and seniors showed enhanced originality. The results highlight trends influenced by prior experience and creative intentions, emphasizing the need for customized instructions with modular robotics to improve CPS across the lifespan.
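The divergent-thinking metrics named above are commonly operationalized by counting ideas (fluidity/fluency), counting distinct idea categories (flexibility), and weighting ideas by their rarity across the sample (originality). The snippet below is a minimal sketch of that generic scoring scheme, not the CreaCube coding protocol; the category labels and frequency values are hypothetical.

```python
def divergent_thinking_scores(ideas, categories, corpus_frequencies):
    """Score one participant's set of ideas.

    ideas: list of idea labels produced by the participant
    categories: dict mapping each idea to a conceptual category
    corpus_frequencies: dict mapping each idea to the fraction of the whole
        sample that produced it (rarer ideas count as more original)
    """
    fluency = len(ideas)                               # fluidity: number of ideas
    flexibility = len({categories[i] for i in ideas})  # number of distinct categories
    originality = (
        sum(1.0 - corpus_frequencies.get(i, 0.0) for i in ideas) / fluency
        if fluency else 0.0
    )
    return {"fluidity": fluency, "flexibility": flexibility, "originality": originality}


# Hypothetical example: three cube-assembly ideas from one participant.
ideas = ["tower", "bridge", "rotating arm"]
categories = {"tower": "stacking", "bridge": "spanning", "rotating arm": "mechanism"}
frequencies = {"tower": 0.80, "bridge": 0.35, "rotating arm": 0.05}
print(divergent_thinking_scores(ideas, categories, frequencies))
```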

Citations: 0
Exploiting passive behaviours for diverse musical playing using the parametric hand.
IF 2.9 Q2 ROBOTICS Pub Date : 2024-12-13 eCollection Date: 2024-01-01 DOI: 10.3389/frobt.2024.1463744
Kieran Gilday, Dohyeon Pyeon, S Dhanush, Kyu-Jin Cho, Josie Hughes

Creativity and style in music playing originate from constraints and imperfect interactions between instruments and players. Digital and robotic systems have so far been unable to capture this naturalistic playing. Whether as an additional tool for musicians, function restoration with prosthetics, or artificial intelligence-powered systems, the physical embodiment and interactions generated are critical for expression and connection with an audience. We introduce the parametric hand, which serves as a platform to explore the generation of diverse interactions for the stylistic playing of both pianos and guitars. The hand's anatomical design and non-linear actuation are exploitable with simple kinematic modeling and synergistic actuation. This enables the modulation of two degrees of freedom for piano chord playing and guitar strumming, with up to 6.6 times the variation in signal amplitude. When only varying hand stiffness properties, we achieve capabilities similar to the variation exhibited in human strumming. Finally, we demonstrate the exploitability of behaviours with the rapid programming of posture and stiffness for sequential instrument playing, including guitar pick grasping. In summary, we highlight the utility of embodied intelligence in musical instrument playing through interactive behavioural diversity, as well as the ability to exploit behaviours over this diversity through designed behavioural robustness and synergistic actuation.

Citations: 0
Fostering children's creativity through LLM-driven storytelling with a social robot.
IF 2.9 Q2 ROBOTICS Pub Date : 2024-12-13 eCollection Date: 2024-01-01 DOI: 10.3389/frobt.2024.1457429
Maha Elgarf, Hanan Salam, Christopher Peters

Creativity is an important skill that is known to plummet in children when they start school education, which limits their freedom of expression and their imagination. On the other hand, research has shown that integrating social robots into educational settings has the potential to maximize children's learning outcomes. Therefore, our aim in this work was to investigate stimulating children's creativity through child-robot interactions. We fine-tuned a Large Language Model (LLM) to exhibit creative behavior and non-creative behavior in a robot and conducted two studies with children to evaluate the viability of our methods in fostering children's creativity skills. We evaluated creativity in terms of four metrics: fluency, flexibility, elaboration, and originality. We first conducted a study as a storytelling interaction between a child and a wizard-ed social robot in one of two conditions (creative versus non-creative) with 38 children. We investigated whether interacting with a creative social robot would elicit more creativity from children. However, we did not find a significant effect of the robot's creativity on children's creative abilities. Second, in an attempt to increase the possibility for the robot to have an impact on children's creativity and to increase the fluidity of the interaction, we produced two models that allow a social agent to autonomously engage with a human in a storytelling context in a creative manner and a non-creative manner respectively. Finally, we conducted another study to evaluate our models by deploying them on a social robot and evaluating them with 103 children. Our results show that children who interacted with the creative autonomous robot were more creative than children who interacted with the non-creative autonomous robot in terms of the fluency, the flexibility, and the elaboration aspects of creativity. The results highlight the difference in children's learning performance when interacting with a robot operated at different autonomy levels (Wizard of Oz versus autonomous). Furthermore, they emphasize the impact of designing adequate robot behaviors on children's corresponding learning gains in child-robot interactions.
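The abstract does not specify how the fine-tuned LLM was steered between creative and non-creative behavior. Purely as an illustrative sketch (not the authors' method), one common lever is the sampling configuration of a text-generation model; the model name and prompt below are placeholders, using the Hugging Face transformers pipeline API.

```python
from transformers import pipeline

# Placeholder model; the study fine-tuned its own storytelling LLM.
storyteller = pipeline("text-generation", model="gpt2")

prompt = "Once upon a time, a curious robot found a glowing seed."

# "Creative" condition: high-temperature nucleus sampling yields more varied continuations.
creative = storyteller(prompt, max_new_tokens=60, do_sample=True,
                       temperature=1.2, top_p=0.95)[0]["generated_text"]

# "Non-creative" condition: greedy decoding yields predictable continuations.
plain = storyteller(prompt, max_new_tokens=60, do_sample=False)[0]["generated_text"]

print(creative)
print(plain)
```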

Citations: 0
A roadmap for improving data quality through standards for collaborative intelligence in human-robot applications.
IF 2.9 Q2 ROBOTICS Pub Date : 2024-12-12 eCollection Date: 2024-01-01 DOI: 10.3389/frobt.2024.1434351
Shakra Mehak, Inês F Ramos, Keerthi Sagar, Aswin Ramasubramanian, John D Kelleher, Michael Guilfoyle, Gabriele Gianini, Ernesto Damiani, Maria Chiara Leva

Collaborative intelligence (CI) involves human-machine interactions and is deemed safety-critical because reliable interaction is crucial in preventing severe injuries and environmental damage. As these applications become increasingly data-driven, the reliability of CI applications depends on the quality of data, shaping the system's ability to interpret and respond in diverse and often unpredictable environments. In this regard, it is important to adhere to data quality standards and guidelines, thus facilitating the advancement of these collaborative systems in industry. This study presents the challenges of data quality in CI applications within industrial environments, with two use cases that focus on the collection of data in Human-Robot Interaction (HRI). The first use case involves a framework for quantifying human and robot performance within the context of naturalistic robot learning, wherein humans teach robots using intuitive programming methods within the domain of HRI. The second use case presents real-time user state monitoring for adaptive multi-modal teleoperation, which allows for a dynamic adaptation of the system's interface, interaction modality and automation level based on user needs. The article proposes a hybrid standardization derived from established data quality-related ISO standards and addresses the unique challenges associated with multi-modal HRI data acquisition. The use cases presented in this study were carried out as part of an EU-funded project, Collaborative Intelligence for Safety-Critical Systems (CISC).

Citations: 0
Advancing teleoperation for legged manipulation with wearable motion capture.
IF 2.9 Q2 ROBOTICS Pub Date : 2024-12-11 eCollection Date: 2024-01-01 DOI: 10.3389/frobt.2024.1430842
Chengxu Zhou, Yuhui Wan, Christopher Peers, Andromachi Maria Delfaki, Dimitrios Kanoulas

The sanctity of human life mandates the replacement of individuals with robotic systems in the execution of hazardous tasks. Explosive Ordnance Disposal (EOD), a field fraught with mortal danger, stands at the forefront of this transition. In this study, we explore the potential of robotic telepresence as a safeguard for human operatives, drawing on the robust capabilities demonstrated by legged manipulators in diverse operational contexts. The challenge of autonomy in such precarious domains underscores the advantages of teleoperation: a harmonious blend of human intuition and robotic execution. Herein, we introduce a cost-effective telepresence and teleoperation system employing a legged manipulator, which combines a quadruped robot, an integrated manipulative arm, and RGB-D sensory capabilities. Our innovative approach tackles the intricate challenge of whole-body control for a quadrupedal manipulator. The core of our system is an IMU-based motion capture suit, enabling intuitive teleoperation, augmented by immersive visual telepresence via a VR headset. We have empirically validated our integrated system through rigorous real-world applications, focusing on loco-manipulation tasks that necessitate comprehensive robot control and enhanced visual telepresence for EOD operations.

Citations: 0
Can a human sing with an unseen artificial partner? Coordination dynamics when singing with an unseen human or artificial partner.
IF 2.9 Q2 ROBOTICS Pub Date : 2024-12-09 eCollection Date: 2024-01-01 DOI: 10.3389/frobt.2024.1463477
Rina Nishiyama, Tetsushi Nonaka

This study investigated whether a singer's coordination patterns differ when singing with an unseen human partner versus an unseen artificial partner (VOCALOID 6 voice synthesis software). We used cross-correlation analysis to compare the correlation of the amplitude envelope time series between the partner's and the participant's singing voices. We also conducted a Granger causality test to determine whether the past amplitude envelope of the partner helps predict the future amplitude envelope of the participants, or if the reverse is true. We found more pronounced characteristics of anticipatory synchronization and increased similarity in the unfolding dynamics of the amplitude envelopes in the human-partner condition compared to the artificial-partner condition, despite the tempo fluctuations in the human-partner condition. The results suggested that subtle qualities of the human singing voice, possibly stemming from intrinsic dynamics of the human body, may contain information that enables human agents to align their singing behavior dynamics with a human partner.
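As a rough sketch of the reported analysis (amplitude envelopes, cross-correlation, Granger causality) under assumed variable names, sampling rate, and placeholder signals, not the authors' code:

```python
import numpy as np
from scipy.signal import hilbert, correlate, correlation_lags
from statsmodels.tsa.stattools import grangercausalitytests

fs = 100  # assumed sampling rate of the amplitude envelopes (Hz)

def amplitude_envelope(x):
    """Magnitude of the analytic signal as a simple amplitude envelope."""
    return np.abs(hilbert(x))

# Placeholder voice signals; in the study these would be the partner's and
# the participant's singing voices recorded on separate channels.
rng = np.random.default_rng(0)
partner = rng.standard_normal(fs * 10)
participant = np.roll(partner, 20) + 0.1 * rng.standard_normal(fs * 10)

env_partner = amplitude_envelope(partner)
env_participant = amplitude_envelope(participant)

# Cross-correlation of the two envelope time series: the lag at the peak gives
# the temporal offset between them.
xcorr = correlate(env_participant - env_participant.mean(),
                  env_partner - env_partner.mean(), mode="full")
lags = correlation_lags(len(env_participant), len(env_partner), mode="full")
print(f"peak cross-correlation lag: {lags[np.argmax(xcorr)] / fs:.2f} s")

# Granger causality: does the partner's past envelope help predict the
# participant's future envelope? Column 2 is tested as a cause of column 1.
data = np.column_stack([env_participant, env_partner])
results = grangercausalitytests(data, maxlag=10)
```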

Citations: 0
A versatile real-time vision-led runway localisation system for enhanced autonomy.
IF 2.9 Q2 ROBOTICS Pub Date : 2024-12-06 eCollection Date: 2024-01-01 DOI: 10.3389/frobt.2024.1490812
Kyriacos Tsapparellas, Nickolay Jelev, Jonathon Waters, Aditya M Shrikhande, Sabine Brunswicker, Lyudmila S Mihaylova

This paper proposes a solution to the challenging task of autonomously landing Unmanned Aerial Vehicles (UAVs). An onboard computer vision module integrates the vision system with the ground control communication and video server connection. The vision platform performs feature extraction using the Speeded Up Robust Features (SURF), followed by fast Structured Forests edge detection and then smoothing with a Kalman filter for accurate runway sidelines prediction. A thorough evaluation is performed over real-world and simulation environments with respect to accuracy and processing time, in comparison with state-of-the-art edge detection approaches. The vision system is validated over videos with clear and difficult weather conditions, including fog, varying lighting conditions and crosswind landings. The experiments are performed using data from the X-Plane 11 flight simulator and real flight data from the Uncrewed Low-cost TRAnsport (ULTRA) self-flying cargo UAV. The vision-led system can localise the runway sidelines with a Structured Forests approach with an accuracy of approximately 84.4%, outperforming the state-of-the-art approaches and delivering real-time performance. The main contribution of this work consists of the developed vision-led system for runway detection to aid autonomous landing of UAVs using electro-optical cameras. Although implemented with the ULTRA UAV, the vision-led system is applicable to any other UAV.
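A minimal OpenCV sketch of the named stages (SURF features, Structured Forests edges, Kalman smoothing) is given below, assuming opencv-contrib-python built with the non-free modules and a pre-trained Structured Forests model file; the file paths, thresholds, and single-parameter filter state are placeholders, not the paper's implementation.

```python
import cv2
import numpy as np

# SURF is in the non-free contrib module (requires OPENCV_ENABLE_NONFREE at build time).
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
# Pre-trained Structured Forests edge model (placeholder path).
edge_detector = cv2.ximgproc.createStructuredEdgeDetection("model.yml.gz")

# Constant-velocity Kalman filter over one sideline parameter (e.g., its image slope):
# state = [value, rate], measurement = [value].
kf = cv2.KalmanFilter(2, 1)
kf.transitionMatrix = np.array([[1, 1], [0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0]], np.float32)
kf.processNoiseCov = 1e-4 * np.eye(2, dtype=np.float32)
kf.measurementNoiseCov = np.array([[1e-2]], np.float32)

cap = cv2.VideoCapture("approach.mp4")  # placeholder landing-approach video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    keypoints, descriptors = surf.detectAndCompute(gray, None)  # SURF features

    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    edges = edge_detector.detectEdges(rgb)  # float32 edge map in [0, 1]

    # Placeholder measurement: fit a line to strong edge pixels and take its slope,
    # then smooth it with the Kalman filter.
    ys, xs = np.nonzero(edges > 0.5)
    if len(xs) > 2:
        slope = np.polyfit(xs, ys, 1)[0]
        kf.predict()
        kf.correct(np.array([[np.float32(slope)]]))
        print(f"smoothed sideline slope: {float(kf.statePost[0]):.3f}")
cap.release()
```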

Citations: 0
Music, body, and machine: gesture-based synchronization in human-robot musical interaction.
IF 2.9 Q2 ROBOTICS Pub Date : 2024-12-05 eCollection Date: 2024-01-01 DOI: 10.3389/frobt.2024.1461615
Xuedan Gao, Amit Rogel, Raghavasimhan Sankaranarayanan, Brody Dowling, Gil Weinberg

Musical performance relies on nonverbal cues for conveying information among musicians. Human musicians use bodily gestures to communicate their interpretation and intentions to their collaborators, from mood and expression to anticipatory cues regarding structure and tempo. Robotic Musicians can use their physical bodies in a similar way when interacting with fellow musicians. The paper presents a new theoretical framework to classify musical gestures and a study evaluating the effect of robotic gestures on synchronization between human musicians and Shimon, a robotic marimba player developed at Georgia Tech. Shimon utilizes head and arm movements to signify musical information such as expected notes, tempo, and beat. The study, in which piano players were asked to play along with Shimon, assessed the effectiveness of these gestures on human-robot synchronization. Subjects were evaluated for their ability to synchronize with unknown tempo changes as communicated by Shimon's ancillary and social gestures. The results demonstrate the significant contribution of non-instrumental gestures to human-robot synchronization, highlighting the importance of non-music-making gestures for anticipation and coordination in human-robot musical collaboration. Subjects also indicated more positive feelings when interacting with the robot's ancillary and social gestures, indicating the role of these gestures in supporting engaging and enjoyable musical experiences.
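One hypothetical way to quantify re-synchronization after an uncued tempo change (not necessarily the study's metric) is the mean absolute asynchrony between each human note onset and the nearest robot beat, compared before and after the change; the onset times below are simulated.

```python
import numpy as np

def mean_asynchrony(human_onsets, robot_beats):
    """Mean absolute offset (s) between each human onset and its nearest robot beat."""
    robot_beats = np.asarray(robot_beats)
    offsets = [np.min(np.abs(robot_beats - t)) for t in human_onsets]
    return float(np.mean(offsets)) if offsets else float("nan")

# Simulated onsets: the robot shifts from 100 BPM (0.6 s beats) to 120 BPM (0.5 s beats) at t = 6 s.
robot_beats = np.concatenate([np.arange(0.0, 6.0, 0.6), 6.0 + np.arange(0.0, 6.0, 0.5)])
human_onsets = robot_beats + np.random.default_rng(1).normal(0.0, 0.03, robot_beats.size)

before = mean_asynchrony(human_onsets[robot_beats < 6.0], robot_beats[robot_beats < 6.0])
after = mean_asynchrony(human_onsets[robot_beats >= 6.0], robot_beats[robot_beats >= 6.0])
print(f"mean asynchrony before change: {before * 1000:.0f} ms, after: {after * 1000:.0f} ms")
```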

Citations: 0
Multiple-agent promotion in a grocery store: effects of modality and variability of agents on customer memory.
IF 2.9 Q2 ROBOTICS Pub Date : 2024-12-05 eCollection Date: 2024-01-01 DOI: 10.3389/frobt.2024.1397230
Takato Mizuho, Yuki Okafuji, Jun Baba, Takuji Narumi

The use of social robots for product advertising is becoming prevalent. Previous studies have demonstrated that social robots can positively impact ad hoc sales recommendations. However, the essential question of "how effectively customers remember the advertised content" remains unexplored. To address this gap, we conducted a field study where physical robots or virtual agents were stationed at two locations within a grocery store for product promotion. Based on prior research, we hypothesized that customers would exhibit better recall of promotional content when it is heard from different agents rather than the same agent. Moreover, we posited that customers would exhibit more favorable social attitudes toward physical robots than virtual agents, resulting in enhanced recall. The results did not support our hypotheses, as no significant differences were observed between the conditions. However, when the physical robot was used, we observed a significant positive correlation between subjective ratings such as social presence and recall performance. This trend was not evident when the virtual agent was used. This study is a stepping stone for future research evaluating agent-based product promotion in terms of customer memory.
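The positive association reported for the physical-robot condition is a simple bivariate correlation; a minimal sketch with hypothetical per-participant values (the variable names and numbers are illustrative only) is:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-participant data for the physical-robot condition.
social_presence = np.array([3.1, 4.0, 2.5, 4.6, 3.8, 2.9, 4.2, 3.5])  # questionnaire ratings
recall_score = np.array([2, 4, 1, 5, 4, 2, 4, 3])                     # promoted items recalled

r, p = pearsonr(social_presence, recall_score)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```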

Citations: 0
L-AVATeD: The lidar and visual walking terrain dataset.
IF 2.9 Q2 ROBOTICS Pub Date : 2024-12-04 eCollection Date: 2024-01-01 DOI: 10.3389/frobt.2024.1384575
David Whipps, Patrick Ippersiel, Philippe C Dixon
{"title":"L-AVATeD: The lidar and visual walking terrain dataset.","authors":"David Whipps, Patrick Ippersiel, Philippe C Dixon","doi":"10.3389/frobt.2024.1384575","DOIUrl":"10.3389/frobt.2024.1384575","url":null,"abstract":"","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1384575"},"PeriodicalIF":2.9,"publicationDate":"2024-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11653013/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142856117","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0