Pub Date: 2024-04-25 | DOI: 10.3389/frobt.2024.1362463
Evaluation of co-speech gestures grounded in word-distributed representation
Kosuke Sasaki, Jumpei Nishikawa, Junya Morita
A condition for artificial agents to possess perceivable intentions is that they have resolved a form of the symbol grounding problem. Here, symbol grounding is understood as the state in which the language used by the agent is endowed with quantitative meaning extracted from the physical world. To achieve this type of symbol grounding, we adopt a method for characterizing robot gestures with quantitative meaning calculated from word-distributed representations constructed from a large text corpus. In this method, a “size image” of a word is generated by defining an axis (index) that discriminates the “size” of the word in the word-distributed vector space. The generated size images are then converted into gestures performed by a physical artificial agent (a robot). The robot’s gesture can be set to reflect the size of the word either in the amount of movement or in its posture. To examine the perception of communicative intention in a robot performing the gestures generated in this way, we collected human ratings of naturalness through an online survey, yielding results that partially validate the proposed method. Based on these results, we argue for the possibility of developing advanced artifacts that achieve human-like symbol grounding.
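As a rough illustration of the kind of embedding-based “size” index described above (not the authors’ exact pipeline), the sketch below defines a size axis in a pre-trained word-embedding space as the direction from prototypically “small” words to prototypically “large” words and scores arbitrary words by projecting onto that axis; the embedding model and the prototype word lists are illustrative assumptions.

```python
# Minimal sketch (not the authors' exact method) of deriving a "size image"
# for a word from a distributed representation: a size axis is defined as the
# difference between the mean vectors of prototype "large" and "small" words,
# and each word is scored by projection onto that axis. The embedding model
# name and the prototype word lists are illustrative assumptions.
import numpy as np
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-100")  # assumed pre-trained embedding

large_words = ["huge", "enormous", "giant", "massive"]
small_words = ["tiny", "small", "miniature", "minuscule"]

axis = np.mean([model[w] for w in large_words], axis=0) - \
       np.mean([model[w] for w in small_words], axis=0)
axis /= np.linalg.norm(axis)

def size_score(word: str) -> float:
    """Project a word vector onto the size axis (higher = 'larger' image)."""
    v = model[word]
    return float(np.dot(v / np.linalg.norm(v), axis))

for w in ["elephant", "mountain", "ant", "coin"]:
    print(w, round(size_score(w), 3))

# The scalar score could then be mapped to a gesture parameter, e.g., the
# amplitude of an arm movement or how widely the robot spreads its arms.
```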
{"title":"Evaluation of co-speech gestures grounded in word-distributed representation","authors":"Kosuke Sasaki, Jumpei Nishikawa, Junya Morita","doi":"10.3389/frobt.2024.1362463","DOIUrl":"https://doi.org/10.3389/frobt.2024.1362463","url":null,"abstract":"The condition for artificial agents to possess perceivable intentions can be considered that they have resolved a form of the symbol grounding problem. Here, the symbol grounding is considered an achievement of the state where the language used by the agent is endowed with some quantitative meaning extracted from the physical world. To achieve this type of symbol grounding, we adopt a method for characterizing robot gestures with quantitative meaning calculated from word-distributed representations constructed from a large corpus of text. In this method, a “size image” of a word is generated by defining an axis (index) that discriminates the “size” of the word in the word-distributed vector space. The generated size images are converted into gestures generated by a physical artificial agent (robot). The robot’s gesture can be set to reflect either the size of the word in terms of the amount of movement or in terms of its posture. To examine the perception of communicative intention in the robot that performs the gestures generated as described above, the authors examine human ratings on “the naturalness” obtained through an online survey, yielding results that partially validate our proposed method. Based on the results, the authors argue for the possibility of developing advanced artifacts that achieve human-like symbolic grounding.","PeriodicalId":504612,"journal":{"name":"Frontiers in Robotics and AI","volume":"8 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140653749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-24 | DOI: 10.3389/frobt.2024.1289414
Promising directions for human-robot interactions defined by older adults
Anastasia K. Ostrowski, Jennifer Zhang, Cynthia Breazeal, Hae Won Park
Introduction: Older adults are engaging more and more with voice-based agents and social robot technologies, and roboticists are increasingly designing interactions for these systems with older adults in mind. Yet older adults are often not included in these design processes, even though there are many opportunities for them to collaborate with design teams to shape future robot interactions and help guide directions for robot development.
Methods: Through a year-long co-design project, we collaborated with 28 older adults to understand the key focus areas in which they see promise for older adult–robot interaction in their everyday lives and how they would like these interactions to be designed. This paper describes and explores the robot-interaction guidelines and future directions identified by older adults, specifically investigating how these guidelines changed over the course of the co-design process, from the initial interview to the design-guideline generation session to the final interview. Results were analyzed through an adapted ethnographic decision tree modeling approach to understand older adults’ decision-making surrounding the various focus areas and guidelines for social robots.
Results: Over the course of the co-design process, older adults developed a better understanding of the robot, which made them more certain of how they would like a robot to engage with them in their lives. Older adults were more accepting of transactional functions such as reminders and scheduling, and less open to functions that would involve sharing sensitive information or tracking and/or monitoring them, expressing concerns about surveillance. There was some promise in robot interactions for connecting with others, body-signal monitoring, and emotional wellness, though older adults raised concerns around autonomy, privacy, and the naturalness of interaction with a robot that need to be explored further.
Discussion: This work provides guidance for future interaction development for robots designed to interact with older adults and highlights areas that need to be investigated further with older adults to understand how best to design for user concerns.
{"title":"Promising directions for human-robot interactions defined by older adults","authors":"Anastasia K. Ostrowski, Jennifer Zhang, Cynthia Breazeal, Hae Won Park","doi":"10.3389/frobt.2024.1289414","DOIUrl":"https://doi.org/10.3389/frobt.2024.1289414","url":null,"abstract":"Introduction: Older adults are engaging more and more with voice-based agent and social robot technologies, and roboticists are increasingly designing interactions for these systems with older adults in mind. Older adults are often not included in these design processes, yet there are many opportunities for older adults to collaborate with design teams to design future robot interactions and help guide directions for robot development.Methods: Through a year-long co-design project, we collaborated with 28 older adults to understand the key focus areas that older adults see promise in for older adult-robot interaction in their everyday lives and how they would like these interactions to be designed. This paper describes and explores the robot-interaction guidelines and future directions identified by older adults, specifically investigating the change and trajectory of these guidelines through the course of the co-design process from the initial interview to the design guideline generation session to the final interview. Results were analyzed through an adapted ethnographic decision tree modeling approach to understand older adults’ decision making surrounding the various focus areas and guidelines for social robots.Results: Overall, over the course of the co-design process between the beginning and end, older adults developed a better understanding of the robot that translated to them being more certain of their attitudes of how they would like a robot to engage with them in their lives. Older adults were more accepting of transactional functions such as reminders and scheduling and less open to functions that would involve sharing sensitive information and tracking and/or monitoring of them, expressing concerns around surveillance. There was some promise in robot interactions for connecting with others, body signal monitoring, and emotional wellness, though older adults brought up concerns around autonomy, privacy, and naturalness of the interaction with a robot that need to be further explored.Discussion: This work provides guidance for future interaction development for robots that are being designed to interact with older adults and highlights areas that need to be further investigated with older adults to understand how best to design for user concerns.","PeriodicalId":504612,"journal":{"name":"Frontiers in Robotics and AI","volume":"50 8","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140664312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-24 | DOI: 10.3389/frobt.2024.1328467
Managing social-educational robotics for students with autism spectrum disorder through business model canvas and customer discovery
A. Arora, Amit Arora, K. Sivakumar, John R. McIntyre
Social-educational robotics, such as NAO humanoid robots with social, anthropomorphic, humanlike features, are tools for learning, education, and addressing developmental disorders (e.g., autism spectrum disorder, or ASD) through social and collaborative robotic interactions and interventions. There are significant gaps at the intersection of social robotics and autism research concerning how robotic technology helps ASD individuals with their social, emotional, and communication needs and supports teachers who engage with ASD students. This research aims to (a) obtain new scientific knowledge on social-educational robotics by exploring the use of social robots (especially humanoids) and robotic interventions with ASD students in high schools, through a triad framework of ASD students and teachers co-working with social robots and social robotic interactions; (b) utilize the Business Model Canvas (BMC) methodology for robot design and curriculum development targeted at ASD students; and (c) connect the interdisciplinary areas of consumer behavior research, social robotics, and human-robot interaction using customer discovery interviews, bridging the gap between academic research on social robotics on the one hand and industry development and customers on the other. The customer discovery process in this research results in eight core research propositions delineating the contexts that enable a higher-quality learning environment aligned with ASD students’ learning requirements through the use of social robots, preparing them for future learning and workforce environments.
{"title":"Managing social-educational robotics for students with autism spectrum disorder through business model canvas and customer discovery","authors":"A. Arora, Amit Arora, K. Sivakumar, John R. McIntyre","doi":"10.3389/frobt.2024.1328467","DOIUrl":"https://doi.org/10.3389/frobt.2024.1328467","url":null,"abstract":"Social-educational robotics, such as NAO humanoid robots with social, anthropomorphic, humanlike features, are tools for learning, education, and addressing developmental disorders (e.g., autism spectrum disorder or ASD) through social and collaborative robotic interactions and interventions. There are significant gaps at the intersection of social robotics and autism research dealing with how robotic technology helps ASD individuals with their social, emotional, and communication needs, and supports teachers who engage with ASD students. This research aims to (a) obtain new scientific knowledge on social-educational robotics by exploring the usage of social robots (especially humanoids) and robotic interventions with ASD students at high schools through an ASD student–teacher co-working with social robot–social robotic interactions triad framework; (b) utilize Business Model Canvas (BMC) methodology for robot design and curriculum development targeted at ASD students; and (c) connect interdisciplinary areas of consumer behavior research, social robotics, and human-robot interaction using customer discovery interviews for bridging the gap between academic research on social robotics on the one hand, and industry development and customers on the other. The customer discovery process in this research results in eight core research propositions delineating the contexts that enable a higher quality learning environment corresponding with ASD students’ learning requirements through the use of social robots and preparing them for future learning and workforce environments.","PeriodicalId":504612,"journal":{"name":"Frontiers in Robotics and AI","volume":"63 7","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140664741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-24 | DOI: 10.3389/frobt.2024.1256937
Would you be impressed: applying principles of magic to chatbot conversations
Sarah Rose Siskind, Eric Nichols, Randy Gomez
A magician’s trick and a chatbot conversation have something in common: most of their audiences do not know how they work. Both are also constrained by their own limitations: magicians by the constraints of biology and physics, and dialogue systems by the state of current technology. Magicians and chatbot creators also share a goal: they want to engage their audience. But magicians, unlike the designers of dialogue systems, have centuries of practice in gracefully skirting limitations in order to engage their audience and enhance a sense of awe. In this paper, we look at these practices and identify several key principles of magic and psychology to apply to conversations between chatbots and humans. We formulate a model of communication centered on controlling the user’s attention, expectations, decisions, and memory, based on examples from the history of magic. We apply these magic principles to real-world conversations between humans and a social robot and evaluate their effectiveness in a Magical conversation setting compared to a Control conversation that does not incorporate magic principles. We find that human evaluators preferred interactions that incorporated magical principles over interactions that did not. In particular, magical interactions increased 1) the personalization of experience, 2) user engagement, and 3) character likability. Firstly, the magical experience was “personalized”: according to survey results, the magical conversation showed a statistically significant increase in “emotional connection” and “robot familiarity,” suggesting that personalization of the experience leads to higher levels of perceived impressiveness and emotional connection. Secondly, in the Magical conversation, the human interlocutor was perceived to have statistically significantly higher engagement levels on four of seven characteristics. Thirdly, participants judged the robot in the magical conversation to have a significantly greater degree of “energeticness,” “humorousness,” and “interestingness.” Finally, evaluation of the conversations with questions intended to measure the contribution of the magical principles showed statistically significant differences for five of nine principles, indicating a positive contribution of the magical principles to the perceived conversation experience. Overall, our evaluation demonstrates that the psychological principles underlying a magician’s showmanship can be applied to the design of conversational systems to achieve more personalized, engaging, and fun interactions.
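As a hedged illustration of how ratings from such a survey might be compared between the Magical and Control conditions (the abstract does not specify the exact statistical tests, and the numbers below are fabricated placeholders), one could run a non-parametric test on per-participant Likert ratings:

```python
# Minimal sketch of comparing per-participant Likert ratings between a
# "Magical" and a "Control" conversation condition. The numbers are made-up
# placeholders, not the study's data, and the Mann-Whitney U test is only one
# reasonable choice for ordinal ratings.
from scipy.stats import mannwhitneyu

magical_ratings = [5, 4, 5, 4, 5, 3, 4, 5, 4, 4]   # e.g., "emotional connection"
control_ratings = [3, 3, 4, 2, 3, 4, 3, 2, 3, 3]

stat, p_value = mannwhitneyu(magical_ratings, control_ratings,
                             alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Ratings differ significantly between conditions.")
```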
{"title":"Would you be impressed: applying principles of magic to chatbot conversations","authors":"Sarah Rose Siskind, Eric Nichols, Randy Gomez","doi":"10.3389/frobt.2024.1256937","DOIUrl":"https://doi.org/10.3389/frobt.2024.1256937","url":null,"abstract":"A magician’s trick and a chatbot conversation have something in common: most of their audiences do not know how they work. Both are also constrained by their own limitations: magicians by the constraints of biology and physics, and dialogue systems by the status of current technology. Magicians and chatbot creators also share a goal: they want to engage their audience. But magicians, unlike the designers of dialogue systems, have centuries of practice in gracefully skirting limitations in order to engage their audience and enhance a sense of awe. In this paper, we look at these practices and identify several key principles of magic and psychology to apply to conversations between chatbots and humans. We formulate a model of communication centered on controlling the user’s attention, expectations, decisions, and memory based on examples from the history of magic. We apply these magic principles to real-world conversations between humans and a social robot and evaluate their effectiveness in a Magical conversation setting compared to a Control conversation that does not incorporate magic principles. We find that human evaluators preferred interactions that incorporated magical principles over interactions that did not. In particular, magical interactions increased 1) the personalization of experience, 2) user engagement, and 3) character likability. Firstly, the magical experience was “personalized.” According to survey results, the magical conversation demonstrated a statistically significant increase in “emotional connection” and “robot familiarity.” Therefore, the personalization of the experience leads to higher levels of perceived impressiveness and emotional connection. Secondly, in the Magical conversation, we find that the human interlocutor is perceived to have statistically-significantly higher engagement levels in four of seven characteristics. Thirdly, participants judged the robot in the magical conversation to have a significantly greater degree of “energeticness,”“humorousness,” and “interestingness.” Finally, evaluation of the conversations with questions intended to measure contribution of the magical principals showed statistically-significant differences for five out of nine principles, indicating a positive contribution of the magical principles to the perceived conversation experience. Overall, our evaluation demonstrates that the psychological principles underlying a magician’s showmanship can be applied to the design of conversational systems to achieve more personalized, engaging, and fun interactions.","PeriodicalId":504612,"journal":{"name":"Frontiers in Robotics and AI","volume":"100 10","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140659220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-24 | DOI: 10.3389/frobt.2024.1358978
A containerised approach for multiform robotic applications
Giuseppe Cotugno, Rafael Afonso Rodrigues, Graham Deacon, J. Konstantinova
As the field of robotics achieves promising results, there is an increasing need to scale robotic software architectures towards real-world domains. Traditionally, robotic architectures are integrated using common frameworks such as ROS. As a result, systems with a uniform structure are produced, making it difficult to integrate third-party contributions. Virtualisation technologies can simplify the problem, but their use is uncommon in robotics and general integration procedures are still missing. This paper proposes and evaluates a containerised approach for designing and integrating multiform robotic architectures. Our approach aims to augment pre-existing architectures by including third-party contributions. The integration complexity and computational performance of our approach are benchmarked on the EU H2020 SecondHands robotic architecture. Results demonstrate that our approach offers simplicity and flexibility of setup compared to a non-virtualised version, and its computational overhead is negligible, as resources were optimally exploited.
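For readers unfamiliar with the idea, the following minimal sketch shows one way third-party robotic components could be launched as containers sharing the host network so that, for example, ROS nodes inside them can reach a master on the host. It is not the paper’s actual tooling; the image names and environment values are placeholders.

```python
# Minimal sketch (not the paper's tooling) of starting two hypothetical
# third-party robotic components as Docker containers that share the host
# network, so that ROS nodes inside them can reach a roscore on the host.
# The image names are placeholders, not real images.
import subprocess

components = [
    {"name": "perception", "image": "thirdparty/perception:latest"},
    {"name": "grasp_planner", "image": "thirdparty/grasp-planner:latest"},
]

for c in components:
    cmd = [
        "docker", "run", "-d", "--rm",
        "--name", c["name"],
        "--network", "host",                     # share host network for node discovery
        "-e", "ROS_MASTER_URI=http://localhost:11311",
        c["image"],
    ]
    subprocess.run(cmd, check=True)
    print("started", c["name"])
```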
{"title":"A containerised approach for multiform robotic applications","authors":"Giuseppe Cotugno, Rafael Afonso Rodrigues, Graham Deacon, J. Konstantinova","doi":"10.3389/frobt.2024.1358978","DOIUrl":"https://doi.org/10.3389/frobt.2024.1358978","url":null,"abstract":"As the area of robotics achieves promising results, there is an increasing need to scale robotic software architectures towards real-world domains. Traditionally, robotic architectures are integrated using common frameworks, such as ROS. Therefore, systems with a uniform structure are produced, making it difficult to integrate third party contributions. Virtualisation technologies can simplify the problem, but their use is uncommon in robotics and general integration procedures are still missing. This paper proposes and evaluates a containerised approach for designing and integrating multiform robotic architectures. Our approach aims at augmenting preexisting architectures by including third party contributions. The integration complexity and computational performance of our approach is benchmarked on the EU H2020 SecondHands robotic architecture. Results demonstrate that our approach grants simplicity and flexibility of setup when compared to a non-virtualised version. The computational overhead of using our approach is negligible as resources were optimally exploited.","PeriodicalId":504612,"journal":{"name":"Frontiers in Robotics and AI","volume":"45 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140663268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-22 | DOI: 10.3389/frobt.2024.1287446
Soft bioreactor systems: a necessary step toward engineered MSK soft tissue?
Nicole Dvorak, Zekun Liu, P. Mouthuy
A key objective of tissue engineering (TE) is to produce in vitro functional grafts that can replace damaged tissues or organs in patients. TE uses bioreactors, which are controlled environments allowing the application of physical and biochemical cues to relevant cells growing in biomaterials. For soft musculoskeletal (MSK) tissues such as tendons, ligaments, and cartilage, it is now well established that applied mechanical stresses can be incorporated into those bioreactor systems to support tissue growth and maturation via activation of mechanotransduction pathways. However, mechanical stresses applied in the laboratory are often oversimplified compared to those found physiologically and may be a factor in the slow progression of engineered MSK grafts towards the clinic. In recent years, an increasing number of studies have focused on the application of complex loading conditions, applying stresses of different types and directions on tissue constructs in order to better mimic the cellular environment experienced in vivo. Such studies have highlighted the need to improve upon traditional rigid bioreactors, which are often limited to uniaxial loading, in order to apply physiologically relevant multiaxial stresses and elucidate their influence on tissue maturation. To address this need, soft bioreactors have emerged. They employ one or more soft components, such as flexible soft chambers that can twist and bend with actuation, soft compliant actuators that can bend with the construct, and soft sensors that record measurements in situ. This review examines types of traditional rigid bioreactors and their shortcomings, and highlights recent advances in soft bioreactors for MSK TE. Challenges and future applications of such systems are discussed, drawing attention to the exciting prospect of these platforms and their ability to aid the development of functional soft-tissue-engineered grafts.
{"title":"Soft bioreactor systems: a necessary step toward engineered MSK soft tissue?","authors":"Nicole Dvorak, Zekun Liu, P. Mouthuy","doi":"10.3389/frobt.2024.1287446","DOIUrl":"https://doi.org/10.3389/frobt.2024.1287446","url":null,"abstract":"A key objective of tissue engineering (TE) is to produce in vitro funcional grafts that can replace damaged tissues or organs in patients. TE uses bioreactors, which are controlled environments, allowing the application of physical and biochemical cues to relevant cells growing in biomaterials. For soft musculoskeletal (MSK) tissues such as tendons, ligaments and cartilage, it is now well established that applied mechanical stresses can be incorporated into those bioreactor systems to support tissue growth and maturation via activation of mechanotransduction pathways. However, mechanical stresses applied in the laboratory are often oversimplified compared to those found physiologically and may be a factor in the slow progression of engineered MSK grafts towards the clinic. In recent years, an increasing number of studies have focused on the application of complex loading conditions, applying stresses of different types and direction on tissue constructs, in order to better mimic the cellular environment experienced in vivo. Such studies have highlighted the need to improve upon traditional rigid bioreactors, which are often limited to uniaxial loading, to apply physiologically relevant multiaxial stresses and elucidate their influence on tissue maturation. To address this need, soft bioreactors have emerged. They employ one or more soft components, such as flexible soft chambers that can twist and bend with actuation, soft compliant actuators that can bend with the construct, and soft sensors which record measurements in situ. This review examines types of traditional rigid bioreactors and their shortcomings, and highlights recent advances of soft bioreactors in MSK TE. Challenges and future applications of such systems are discussed, drawing attention to the exciting prospect of these platforms and their ability to aid development of functional soft tissue engineered grafts.","PeriodicalId":504612,"journal":{"name":"Frontiers in Robotics and AI","volume":"27 5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140673379","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-18 | DOI: 10.3389/frobt.2024.1324404
Neural dynamics of robust legged robots
Eugene R. Rush, Christoffer Heckman, Kaushik Jayaram, J. S. Humbert
Legged robot control has improved in recent years with the rise of deep reinforcement learning; however, many of the underlying neural mechanisms remain difficult to interpret. Our aim is to leverage bio-inspired methods from computational neuroscience to better understand the neural activity of robust robot locomotion controllers. Similar to past work, we observe that terrain-based curriculum learning improves agent stability. We study the biomechanical responses and neural activity within our neural network controller by simultaneously pairing physical disturbances with targeted neural ablations. We identify an agile hip reflex that enables the robot to regain its balance and recover from lateral perturbations. Model gradients are employed to quantify the relative degree to which various sensory feedback channels drive this reflexive behavior. We also find that recurrent dynamics are implicated in robust behavior, and we utilize sampling-based ablation methods to identify the key neurons involved. Our framework combines model-based and sampling-based methods for drawing causal relationships between neural network activity and robust embodied robot behavior.
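The two analysis ideas mentioned above, gradient-based attribution of sensory channels and ablation of selected neurons, can be sketched on a stand-in policy network as follows. This is a minimal PyTorch illustration with made-up dimensions and indices, not the authors’ trained controller or exact procedure.

```python
# Minimal PyTorch sketch of (1) gradient-based attribution of an action output
# to observation channels and (2) ablation of chosen hidden units by zeroing
# their activations, applied to a stand-in policy network.
import torch
import torch.nn as nn

obs_dim, hidden_dim, act_dim = 48, 128, 12            # made-up dimensions
policy = nn.Sequential(nn.Linear(obs_dim, hidden_dim), nn.Tanh(),
                       nn.Linear(hidden_dim, act_dim))

obs = torch.randn(1, obs_dim, requires_grad=True)      # one observation sample
action = policy(obs)

# 1) Sensitivity of one joint command (index 0, assumed) to each obs channel.
action[0, 0].backward()
channel_sensitivity = obs.grad.abs().squeeze()
top_channels = torch.topk(channel_sensitivity, k=5).indices
print("most influential observation channels:", top_channels.tolist())

# 2) Ablate chosen hidden units with a forward hook that zeroes them.
ablate_idx = [3, 17, 42]                               # arbitrary example units
def ablation_hook(module, inputs, output):
    out = output.clone()
    out[:, ablate_idx] = 0.0
    return out

handle = policy[0].register_forward_hook(ablation_hook)
action_ablated = policy(obs.detach())
handle.remove()
print("action change after ablation:",
      (action_ablated - action.detach()).norm().item())
```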
{"title":"Neural dynamics of robust legged robots","authors":"Eugene R. Rush, Christoffer Heckman, Kaushik Jayaram, J. S. Humbert","doi":"10.3389/frobt.2024.1324404","DOIUrl":"https://doi.org/10.3389/frobt.2024.1324404","url":null,"abstract":"Legged robot control has improved in recent years with the rise of deep reinforcement learning, however, much of the underlying neural mechanisms remain difficult to interpret. Our aim is to leverage bio-inspired methods from computational neuroscience to better understand the neural activity of robust robot locomotion controllers. Similar to past work, we observe that terrain-based curriculum learning improves agent stability. We study the biomechanical responses and neural activity within our neural network controller by simultaneously pairing physical disturbances with targeted neural ablations. We identify an agile hip reflex that enables the robot to regain its balance and recover from lateral perturbations. Model gradients are employed to quantify the relative degree that various sensory feedback channels drive this reflexive behavior. We also find recurrent dynamics are implicated in robust behavior, and utilize sampling-based ablation methods to identify these key neurons. Our framework combines model-based and sampling-based methods for drawing causal relationships between neural network activity and robust embodied robot behavior.","PeriodicalId":504612,"journal":{"name":"Frontiers in Robotics and AI","volume":" 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140688703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-17 | DOI: 10.3389/frobt.2024.1362735
Recruiting neural field theory for data augmentation in a motor imagery brain–computer interface
Daniel Polyakov, Peter A. Robinson, Eli J. Muller, Oren Shriki
We introduce a novel approach to training-data augmentation in brain–computer interfaces (BCIs) using neural field theory (NFT) applied to EEG data from motor imagery tasks. BCIs often suffer from low accuracy due to a limited amount of training data. To address this, we leveraged a corticothalamic NFT model to generate artificial EEG time series as supplemental training data. We used the BCI Competition IV ‘2a’ dataset to evaluate this augmentation technique. For each individual, we fitted the model to the common spatial patterns of each motor imagery class, jittered the fitted parameters, and generated time series for data augmentation. Our method led to significant accuracy improvements of over 2% in classifying the “total power” feature, but not the “Higuchi fractal dimension” feature. This suggests that the fitted NFT model may represent one feature more faithfully than the other. These findings pave the way for further exploration of NFT-based data augmentation, highlighting the benefits of biophysically accurate artificial data.
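A minimal sketch of the jitter-and-generate augmentation loop described above is given below. The generator is a placeholder (filtered noise) standing in for the corticothalamic neural field model, and the parameter names, jitter magnitude, and trial counts are assumptions rather than the paper’s settings.

```python
# Minimal sketch of the augmentation idea: jitter fitted model parameters and
# generate synthetic EEG trials as extra training data. The generator below is
# a placeholder (autocorrelated noise), not the corticothalamic NFT model, and
# the parameter names and values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
fs, n_samples = 250, 1000                              # 4 s trial at 250 Hz

fitted_params = {"gain": 1.2, "alpha": 60.0, "beta": 240.0}  # illustrative

def generate_trial(params):
    """Placeholder generator: noise shaped by a simple AR(1) filter."""
    x = rng.standard_normal(n_samples) * params["gain"]
    for t in range(1, n_samples):
        x[t] += 0.9 * x[t - 1]                         # crude temporal correlation
    return x

def total_power(trial):
    """'Total power' feature: mean squared amplitude of the trial."""
    return float(np.mean(trial ** 2))

augmented = []
for _ in range(50):                                    # 50 synthetic trials per class
    jittered = {k: v * (1 + 0.05 * rng.standard_normal())   # ~5% parameter jitter
                for k, v in fitted_params.items()}
    trial = generate_trial(jittered)
    augmented.append((trial, total_power(trial)))

print(f"{len(augmented)} synthetic trials, "
      f"mean total power = {np.mean([p for _, p in augmented]):.3f}")
```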
{"title":"Recruiting neural field theory for data augmentation in a motor imagery brain–computer interface","authors":"Daniel Polyakov, Peter A. Robinson, Eli J. Muller, Oren Shriki","doi":"10.3389/frobt.2024.1362735","DOIUrl":"https://doi.org/10.3389/frobt.2024.1362735","url":null,"abstract":"We introduce a novel approach to training data augmentation in brain–computer interfaces (BCIs) using neural field theory (NFT) applied to EEG data from motor imagery tasks. BCIs often suffer from limited accuracy due to a limited amount of training data. To address this, we leveraged a corticothalamic NFT model to generate artificial EEG time series as supplemental training data. We employed the BCI competition IV ‘2a’ dataset to evaluate this augmentation technique. For each individual, we fitted the model to common spatial patterns of each motor imagery class, jittered the fitted parameters, and generated time series for data augmentation. Our method led to significant accuracy improvements of over 2% in classifying the “total power” feature, but not in the case of the “Higuchi fractal dimension” feature. This suggests that the fit NFT model may more favorably represent one feature than the other. These findings pave the way for further exploration of NFT-based data augmentation, highlighting the benefits of biophysically accurate artificial data.","PeriodicalId":504612,"journal":{"name":"Frontiers in Robotics and AI","volume":"85 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140693866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-16 | DOI: 10.3389/frobt.2024.1229026
Decentralized multi-agent reinforcement learning based on best-response policies
Volker Gabler, Dirk Wollherr
Introduction: Multi-agent systems are an interdisciplinary research field concerned with multiple decision-making individuals interacting with a usually partially observable environment. Given recent advances in single-agent reinforcement learning, multi-agent reinforcement learning (MARL) has gained tremendous interest in recent years. Most research studies apply a fully centralized learning scheme to ease the transfer from the single-agent domain to multi-agent systems.
Methods: In contrast, we claim that a decentralized learning scheme is preferable for applications in real-world scenarios, as it allows deploying a learning algorithm on an individual robot rather than on a complete fleet of robots. Therefore, this article outlines a novel actor–critic (AC) approach tailored to cooperative MARL problems in sparsely rewarded domains. Our approach decouples the MARL problem into a set of distributed agents that model the other agents as responsive entities. In particular, we propose using two separate critics per agent to distinguish between the joint task reward and agent-based costs, as commonly applied within multi-robot planning. On the one hand, the agent-based critic intends to decrease agent-specific costs. On the other hand, each agent intends to optimize the joint team reward based on the joint task critic. As this critic still depends on the joint action of all agents, we outline two suitable behavior models based on Stackelberg games: a game against nature and a dyadic game against each agent. Following these behavior models, our algorithm allows fully decentralized execution and training.
Results and Discussion: We evaluate the presented method using the proposed behavior models within a sparsely rewarded simulated multi-agent environment. Although our approach already outperforms state-of-the-art learners, we conclude by outlining possible extensions of our algorithm that future research may build upon.
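The “two critics per agent” idea can be sketched in PyTorch as follows. The architectures and dimensions are placeholders, and the Stackelberg behavior models that make execution fully decentralized are omitted, so this is an illustrative skeleton rather than the authors’ implementation.

```python
# Minimal PyTorch sketch of an agent with two critics: a joint-task critic
# conditioned on the joint observation/action, and an agent-cost critic
# conditioned only on the agent's own observation/action. All dimensions and
# network shapes are made-up placeholders.
import torch
import torch.nn as nn

class TwoCriticAgent(nn.Module):
    def __init__(self, obs_dim, act_dim, n_agents, hidden=64):
        super().__init__()
        self.actor = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, act_dim))
        # Critic for the shared, sparsely rewarded team task.
        self.task_critic = nn.Sequential(
            nn.Linear(n_agents * (obs_dim + act_dim), hidden), nn.ReLU(),
            nn.Linear(hidden, 1))
        # Critic for this agent's own costs (e.g., energy, collisions).
        self.cost_critic = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def act(self, obs):
        return torch.tanh(self.actor(obs))

    def values(self, joint_obs, joint_act, own_obs, own_act):
        task_v = self.task_critic(torch.cat([joint_obs, joint_act], dim=-1))
        cost_v = self.cost_critic(torch.cat([own_obs, own_act], dim=-1))
        return task_v, cost_v

# Example forward pass with 3 agents and made-up dimensions.
agent = TwoCriticAgent(obs_dim=8, act_dim=2, n_agents=3)
own_obs = torch.randn(1, 8)
own_act = agent.act(own_obs)
joint_obs = torch.randn(1, 3 * 8)
joint_act = torch.randn(1, 3 * 2)
print(agent.values(joint_obs, joint_act, own_obs, own_act.detach()))
```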
{"title":"Decentralized multi-agent reinforcement learning based on best-response policies","authors":"Volker Gabler, Dirk Wollherr","doi":"10.3389/frobt.2024.1229026","DOIUrl":"https://doi.org/10.3389/frobt.2024.1229026","url":null,"abstract":"Introduction: Multi-agent systems are an interdisciplinary research field that describes the concept of multiple decisive individuals interacting with a usually partially observable environment. Given the recent advances in single-agent reinforcement learning, multi-agent reinforcement learning (RL) has gained tremendous interest in recent years. Most research studies apply a fully centralized learning scheme to ease the transfer from the single-agent domain to multi-agent systems.Methods: In contrast, we claim that a decentralized learning scheme is preferable for applications in real-world scenarios as this allows deploying a learning algorithm on an individual robot rather than deploying the algorithm to a complete fleet of robots. Therefore, this article outlines a novel actor–critic (AC) approach tailored to cooperative MARL problems in sparsely rewarded domains. Our approach decouples the MARL problem into a set of distributed agents that model the other agents as responsive entities. In particular, we propose using two separate critics per agent to distinguish between the joint task reward and agent-based costs as commonly applied within multi-robot planning. On one hand, the agent-based critic intends to decrease agent-specific costs. On the other hand, each agent intends to optimize the joint team reward based on the joint task critic. As this critic still depends on the joint action of all agents, we outline two suitable behavior models based on Stackelberg games: a game against nature and a dyadic game against each agent. Following these behavior models, our algorithm allows fully decentralized execution and training.Results and Discussion: We evaluate our presented method using the proposed behavior models within a sparsely rewarded simulated multi-agent environment. Although our approach already outperforms the state-of-the-art learners, we conclude this article by outlining possible extensions of our algorithm that future research may build upon.","PeriodicalId":504612,"journal":{"name":"Frontiers in Robotics and AI","volume":"2 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140697700","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-16 | DOI: 10.3389/frobt.2024.1356827
Human-in-the-loop error detection in an object organization task with a social robot
H. Frijns, Matthias Hirschmanner, Barbara Sienkiewicz, Peter Hönig, B. Indurkhya, Markus Vincze
In human-robot collaboration, failures are bound to occur. A thorough understanding of potential errors is necessary so that designers of robotic systems can develop systems that remedy failure cases. In this work, we study failures that occur when participants interact with a working system, focusing especially on errors in a robotic system’s knowledge base of which the system itself is not aware. A human interaction partner can be part of the error detection process if they are given insight into the robot’s knowledge and decision-making process. We investigate different communication modalities and the design of shared task representations in a joint human-robot object organization task. We conducted a user study (N = 31) in which participants showed a Pepper robot how to organize objects, and the robot communicated the learned object configuration to the participants by means of speech, visualization, or a combination of speech and visualization. The multimodal, combined condition was preferred by 23 participants, followed by seven participants who preferred the visualization. Based on the interviews, the errors that occurred, and the object configurations generated by the participants, we conclude that participants tend to test the system’s limitations by making the task more complex, which provokes errors. This trial-and-error behavior has a productive purpose: it shows that failures arise from the combination of robot capabilities, the user’s understanding and actions, and interaction in the environment, and that failure can help establish better user mental models of the technology.
{"title":"Human-in-the-loop error detection in an object organization task with a social robot","authors":"H. Frijns, Matthias Hirschmanner, Barbara Sienkiewicz, Peter Hönig, B. Indurkhya, Markus Vincze","doi":"10.3389/frobt.2024.1356827","DOIUrl":"https://doi.org/10.3389/frobt.2024.1356827","url":null,"abstract":"In human-robot collaboration, failures are bound to occur. A thorough understanding of potential errors is necessary so that robotic system designers can develop systems that remedy failure cases. In this work, we study failures that occur when participants interact with a working system and focus especially on errors in a robotic system’s knowledge base of which the system is not aware. A human interaction partner can be part of the error detection process if they are given insight into the robot’s knowledge and decision-making process. We investigate different communication modalities and the design of shared task representations in a joint human-robot object organization task. We conducted a user study (N = 31) in which the participants showed a Pepper robot how to organize objects, and the robot communicated the learned object configuration to the participants by means of speech, visualization, or a combination of speech and visualization. The multimodal, combined condition was preferred by 23 participants, followed by seven participants preferring the visualization. Based on the interviews, the errors that occurred, and the object configurations generated by the participants, we conclude that participants tend to test the system’s limitations by making the task more complex, which provokes errors. This trial-and-error behavior has a productive purpose and demonstrates that failures occur that arise from the combination of robot capabilities, the user’s understanding and actions, and interaction in the environment. Moreover, it demonstrates that failure can have a productive purpose in establishing better user mental models of the technology.","PeriodicalId":504612,"journal":{"name":"Frontiers in Robotics and AI","volume":"43 s200","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140694618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}