Freedom comes at a cost?: An exploratory study on affordances’ impact on users’ perception of a social robot
Pub Date: 2024-03-18 | DOI: 10.3389/frobt.2024.1288818
Guanyu Huang, Roger K. Moore
Along with the development of speech and language technologies, the market for speech-enabled human-robot interaction (HRI) has grown in recent years. However, people often find their conversational interactions with such robots far from satisfactory. One of the reasons is the habitability gap, whereby the usability of a speech-enabled agent drops as its flexibility increases. For social robots, such flexibility is reflected in the diverse choice of robots’ appearances, sounds and behaviours, which shape a robot’s ‘affordance’. Whilst designers or users have enjoyed the freedom of constructing a social robot by integrating off-the-shelf technologies, such freedom comes at a potential cost to users’ perceptions and satisfaction. Designing appropriate affordances is essential for the quality of HRI. It is hypothesised that a social robot with aligned affordances could create an appropriate perception of the robot and increase users’ satisfaction when speaking with it. Given that previous studies of affordance alignment mainly focus on single interface characteristics and on face-voice match, we aim to deepen our understanding of affordance alignment with a robot’s behaviours and use cases. In particular, we investigate how a robot’s affordances affect users’ perceptions in different types of use cases. For this purpose, we conducted an exploratory experiment with three affordance settings (adult-like, child-like, and robot-like) and three use cases (informative, emotional, and hybrid). Participants were invited to talk to social robots in person. A mixed-methods approach was employed for quantitative and qualitative analysis of 156 interaction samples. The results show that static affordance (face and voice) has a statistically significant effect on the perceived warmth of the first impression, whereas use cases have a greater effect on perceived competence and warmth before and after interactions. In addition, the results highlight the importance of aligning static affordance with behavioural affordance. General design principles for behavioural affordances are proposed. We anticipate that this empirical evidence will provide clearer guidelines for the affordance design of speech-enabled social robots and serve as a starting point for more sophisticated design guidelines, for example, personalised affordance design for individual or group users in different contexts.
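To make the reported analysis concrete, here is a hedged sketch (not the authors’ actual pipeline, which is not described in the abstract) of how perceived-warmth ratings from the 3 x 3 design (static affordance x use case) could be tested with a two-way ANOVA; the ratings, effect magnitudes, and cell counts below are synthetic placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(7)
affordances = ["adult_like", "child_like", "robot_like"]
use_cases = ["informative", "emotional", "hybrid"]

rows = []
for aff in affordances:
    for uc in use_cases:
        # Assumed effects purely for illustration: child-like affordance and the
        # emotional use case yield slightly warmer ratings on a 1-5 scale.
        mu = 3.0 + (0.6 if aff == "child_like" else 0.0) + (0.4 if uc == "emotional" else 0.0)
        for _ in range(18):   # 9 cells x 18 samples, roughly the 156 interactions reported
            rows.append({"affordance": aff, "use_case": uc,
                         "warmth": float(np.clip(rng.normal(mu, 0.8), 1, 5))})
df = pd.DataFrame(rows)

model = smf.ols("warmth ~ C(affordance) * C(use_case)", data=df).fit()
print(anova_lm(model, typ=2))   # main effects and interaction on perceived warmth
```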
{"title":"Freedom comes at a cost?: An exploratory study on affordances’ impact on users’ perception of a social robot","authors":"Guanyu Huang, Roger K. Moore","doi":"10.3389/frobt.2024.1288818","DOIUrl":"https://doi.org/10.3389/frobt.2024.1288818","url":null,"abstract":"Along with the development of speech and language technologies, the market for speech-enabled human-robot interactions (HRI) has grown in recent years. However, it is found that people feel their conversational interactions with such robots are far from satisfactory. One of the reasons is the habitability gap, where the usability of a speech-enabled agent drops when its flexibility increases. For social robots, such flexibility is reflected in the diverse choice of robots’ appearances, sounds and behaviours, which shape a robot’s ‘affordance’. Whilst designers or users have enjoyed the freedom of constructing a social robot by integrating off-the-shelf technologies, such freedom comes at a potential cost: the users’ perceptions and satisfaction. Designing appropriate affordances is essential for the quality of HRI. It is hypothesised that a social robot with aligned affordances could create an appropriate perception of the robot and increase users’ satisfaction when speaking with it. Given that previous studies of affordance alignment mainly focus on one interface’s characteristics and face-voice match, we aim to deepen our understanding of affordance alignment with a robot’s behaviours and use cases. In particular, we investigate how a robot’s affordances affect users’ perceptions in different types of use cases. For this purpose, we conducted an exploratory experiment that included three different affordance settings (adult-like, child-like, and robot-like) and three use cases (informative, emotional, and hybrid). Participants were invited to talk to social robots in person. A mixed-methods approach was employed for quantitative and qualitative analysis of 156 interaction samples. The results show that static affordance (face and voice) has a statistically significant effect on the perceived warmth of the first impression; use cases affect people’s perceptions more on perceived competence and warmth before and after interactions. In addition, it shows the importance of aligning static affordance with behavioural affordance. General design principles of behavioural affordances are proposed. We anticipate that our empirical evidence will provide a clearer guideline for speech-enabled social robots’ affordance design. It will be a starting point for more sophisticated design guidelines. For example, personalised affordance design for individual or group users in different contexts.","PeriodicalId":504612,"journal":{"name":"Frontiers in Robotics and AI","volume":"342 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140232637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sequential model based on human cognitive processing to robot acceptance
Pub Date: 2024-03-15 | DOI: 10.3389/frobt.2024.1362044
Waka Saeki, Yoshiyuki Ueda
Robots have tremendous potential and have recently been introduced not only for simple operations in factories but also in workplaces where customer-service communication is required. However, communication robots have not always been accepted. This study proposes a three-stage (first contact, interaction, and decision) model of robot acceptance based on the flow of human cognitive processing, intended to guide the design of preferred robots and to clarify which elements of a robot, and which processes, affect acceptance decision-making. Unlike previous robot acceptance models, the current model focuses on a sequential account of how people decide to accept a robot, considering the interaction (or carry-over) effect between impressions established at each stage. Based on the model, we conducted a scenario-based experiment examining how the impression formed at first contact (the robot’s appearance) and the impression formed during interaction (the politeness of its conversation and behavior) affect robot acceptance in both successful and slightly failed situations. The better the appearance of the robot and the more polite its behavior, the greater the acceptance rate. Importantly, there was no interaction between these two factors. The results, indicating that first-contact and interaction impressions are processed additively, suggest that accumulating findings on improving a robot’s appearance and making its communication behavior more human-like in politeness will lead to more acceptable robot designs.
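As a purely illustrative aid (not the study’s analysis), the sketch below shows how the “additive, no interaction” claim could be probed with a logistic regression of acceptance on appearance, politeness, and their interaction; all data, factor codings, and coefficients are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
appearance = rng.integers(0, 2, n)   # 0 = plain, 1 = well-designed (assumed coding)
politeness = rng.integers(0, 2, n)   # 0 = blunt, 1 = polite
logit_p = -0.5 + 0.8 * appearance + 0.9 * politeness   # additive data-generating model
accept = rng.random(n) < 1.0 / (1.0 + np.exp(-logit_p))
df = pd.DataFrame({"accept": accept.astype(int),
                   "appearance": appearance, "politeness": politeness})

# Fit acceptance ~ appearance * politeness; under additive processing the
# interaction coefficient should be close to zero.
model = smf.logit("accept ~ appearance * politeness", data=df).fit(disp=0)
print(model.summary().tables[1])
```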
{"title":"Sequential model based on human cognitive processing to robot acceptance","authors":"Waka Saeki, Yoshiyuki Ueda","doi":"10.3389/frobt.2024.1362044","DOIUrl":"https://doi.org/10.3389/frobt.2024.1362044","url":null,"abstract":"Robots have tremendous potential, and have recently been introduced not only for simple operations in factories, but also in workplaces where customer service communication is required. However, communication robots have not always been accepted. This study proposes a three-stage (first contact, interaction, and decision) model for robot acceptance based on the human cognitive process flow to design preferred robots and clarifies the elements of the robot and the processes that affect robot acceptance decision-making. Unlike previous robot acceptance models, the current model focuses on a sequential account of how people decide to accept, considering the interaction (or carry-over) effect between impressions established at each stage. According to the model, this study conducted a scenario-based experiment focusing on the impression of the first contact (a robot’s appearance) and that formed during the interaction with robot (politeness of its conversation and behavior) on robot acceptance in both successful and slightly failed situations. The better the appearance of the robot and the more polite its behavior, the greater the acceptance rate. Importantly, there was no interaction between these two factors. The results indicating that the impressions of the first contact and interaction are additively processed suggest that we should accumulate findings that improving the appearance of the robot and making its communication behavior more human-like in politeness will lead to a more acceptable robot design.","PeriodicalId":504612,"journal":{"name":"Frontiers in Robotics and AI","volume":"44 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140238924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
General value functions for fault detection in multivariate time series data
Pub Date: 2024-03-13 | DOI: 10.3389/frobt.2024.1214043
Andy Wong, Mehran Taghian Jazi, Tomoharu Takeuchi, Johannes Günther, Osmar Zaïane
One of the greatest challenges to the automated production of goods is equipment malfunction. Ideally, machines should be able to automatically predict and detect operational faults in order to minimize downtime and plan for timely maintenance. While traditional condition-based maintenance (CBM) involves costly sensor additions and engineering, machine learning approaches offer the potential to learn from already existing sensors. Implementations of data-driven CBM typically use supervised and semi-supervised learning to classify faults. In addition to a large collection of operation data, records of faulty operation are also necessary, and these are often costly to obtain. Instead of classifying faults, we use an approach that detects abnormal behaviour within the machine’s operation. This approach is analogous to semi-supervised anomaly detection in machine learning (ML), with important distinctions in experimental design and evaluation specific to the problem of industrial fault detection. We present a novel method of machine fault detection using temporal-difference learning and General Value Functions (GVFs). Using GVFs, we form a predictive model of sensor data to detect faulty behaviour. As sensor data from machines are not i.i.d. but closer to Markovian sampling, temporal-difference learning methods should be well suited to these data. We compare our GVF outlier detection (GVFOD) algorithm to a broad selection of multivariate and temporal outlier detection methods, using datasets collected from a tabletop robot emulating the movement of an industrial actuator. We find that GVFOD not only achieves the same recall score as other multivariate OD algorithms but also attains significantly higher precision. Furthermore, GVFOD has intuitive hyperparameters that can be selected based on expert knowledge of the application. Together, these findings allow more reliable detection of abnormal machine behaviour and better timing of maintenance, saving resources, time, and cost.
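The following is a minimal sketch of the idea behind GVF-based outlier detection, assuming a hashed binary feature map in place of proper tile coding and using the magnitude of the TD error as the abnormality score; it is not the authors’ GVFOD implementation.

```python
import numpy as np

class GVFOutlierSketch:
    """TD(lambda) learner for a single General Value Function over sensor features."""

    def __init__(self, n_features, alpha=0.1, gamma=0.9, lam=0.8):
        self.w = np.zeros(n_features)   # GVF weight vector
        self.z = np.zeros(n_features)   # eligibility trace
        self.alpha, self.gamma, self.lam = alpha, gamma, lam

    def update(self, phi, cumulant, phi_next):
        """One TD(lambda) update; the TD error magnitude doubles as an outlier score."""
        delta = cumulant + self.gamma * (self.w @ phi_next) - self.w @ phi
        self.z = self.gamma * self.lam * self.z + phi
        self.w += self.alpha * delta * self.z
        return abs(delta)

def features(x, n_features=64, n_active=4):
    """Toy hashed binary feature map standing in for proper tile coding (assumption)."""
    phi = np.zeros(n_features)
    idx = np.floor(x * np.arange(1, n_active + 1) * 7.0).astype(int) % n_features
    phi[idx] = 1.0
    return phi

# Learn the GVF on normal operation data, then calibrate an anomaly threshold
# from the TD-error scores (semi-supervised: no faulty examples are needed).
rng = np.random.default_rng(0)
normal = np.sin(np.linspace(0, 40, 2000)) + 0.05 * rng.standard_normal(2000)
gvf = GVFOutlierSketch(n_features=64)
scores = [gvf.update(features(normal[t]), normal[t + 1], features(normal[t + 1]))
          for t in range(len(normal) - 1)]
threshold = np.mean(scores[500:]) + 3.0 * np.std(scores[500:])
print(f"anomaly threshold from normal data: {threshold:.4f}")
```

At test time, windows of operation whose mean TD-error score exceeds the calibrated threshold would be flagged as abnormal.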
{"title":"General value functions for fault detection in multivariate time series data","authors":"Andy Wong, Mehran Taghian Jazi, Tomoharu Takeuchi, Johannes Günther, Osmar Zaïane","doi":"10.3389/frobt.2024.1214043","DOIUrl":"https://doi.org/10.3389/frobt.2024.1214043","url":null,"abstract":"One of the greatest challenges to the automated production of goods is equipment malfunction. Ideally, machines should be able to automatically predict and detect operational faults in order to minimize downtime and plan for timely maintenance. While traditional condition-based maintenance (CBM) involves costly sensor additions and engineering, machine learning approaches offer the potential to learn from already existing sensors. Implementations of data-driven CBM typically use supervised and semi-supervised learning to classify faults. In addition to a large collection of operation data, records of faulty operation are also necessary, which are often costly to obtain. Instead of classifying faults, we use an approach to detect abnormal behaviour within the machine’s operation. This approach is analogous to semi-supervised anomaly detection in machine learning (ML), with important distinctions in experimental design and evaluation specific to the problem of industrial fault detection. We present a novel method of machine fault detection using temporal-difference learning and General Value Functions (GVFs). Using GVFs, we form a predictive model of sensor data to detect faulty behaviour. As sensor data from machines is not i.i.d. but closer to Markovian sampling, temporal-difference learning methods should be well suited for this data. We compare our GVF outlier detection (GVFOD) algorithm to a broad selection of multivariate and temporal outlier detection methods, using datasets collected from a tabletop robot emulating the movement of an industrial actuator. We find that not only does GVFOD achieve the same recall score as other multivariate OD algorithms, it attains significantly higher precision. Furthermore, GVFOD has intuitive hyperparameters which can be selected based upon expert knowledge of the application. Together, these findings allow for a more reliable detection of abnormal machine behaviour to allow ideal timing of maintenance; saving resources, time and cost.","PeriodicalId":504612,"journal":{"name":"Frontiers in Robotics and AI","volume":"2016 21","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140246199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cooperative planning for physically interacting heterogeneous robots
Pub Date: 2024-03-13 | DOI: 10.3389/frobt.2024.1172105
Michael A. Sebok, Herbert G. Tanner
Heterogeneous multi-agent systems can be deployed to complete a variety of tasks, including some that are impossible using a single generic modality. This paper introduces an approach to the problem of cooperative behavior planning in small heterogeneous robot teams whose members can both function independently and physically interact with each other in ways that give rise to additional functionality. This approach enables, for the first time, the cooperative completion of tasks that are infeasible using any single modality of the agents comprising the team.
{"title":"Cooperative planning for physically interacting heterogeneous robots","authors":"Michael A. Sebok, Herbert G. Tanner","doi":"10.3389/frobt.2024.1172105","DOIUrl":"https://doi.org/10.3389/frobt.2024.1172105","url":null,"abstract":"Heterogeneous multi-agent systems can be deployed to complete a variety of tasks, including some that are impossible using a single generic modality. This paper introduces an approach to solving the problem of cooperative behavior planning in small heterogeneous robot teams where members can both function independently as well as physically interact with each other in ways that give rise to additional functionality. This approach enables, for the first time, the cooperative completion of tasks that are infeasible when using any single modality from those agents comprising the team.","PeriodicalId":504612,"journal":{"name":"Frontiers in Robotics and AI","volume":"9 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140247437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enabling robustness to failure with modular field robots
Pub Date: 2024-03-13 | DOI: 10.3389/frobt.2024.1225297
Troy Cordie, Jonathan Roberts, Matthew Dunbabin, Ross Dungavell, Tirthankar Bandyopadhyay
Actuator failure on a remotely deployed robot results in decreased efficiency or can even render the robot inoperable. Robustness to these failures will become critical as robots are required to be more independent and to operate beyond the range of repair. To address these challenges, we present two approaches based on a modular robotic architecture to improve robustness to actuator failure in both fixed-configuration robots and modular reconfigurable robots. Our work uses modular reconfigurable robots capable of modifying their style of locomotion and of changing their designed morphology by ejecting modules. This framework improved the distance travelled and decreased the effort required to move through the environment for both simulated and physical robots. When the deployed robot was allowed to change its locomotion style, it showed improved robustness to actuator failure compared to a robot with a fixed controller. Furthermore, a robot capable of changing both its locomotion and its designed morphology statistically outlasted the fixed-morphology configurations in both tests. Testing was carried out in a Gazebo simulation and validated in multiple field tests. We show, for the first time, that ejecting failed modular components can improve the overall mission length. A purely illustrative sketch of this failure-response logic follows.
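The snippet below is illustrative decision logic only, not the authors’ framework: on actuator failure the robot first tries alternative locomotion styles and ejects the failed module only if no gait restores progress. The gait names, progress check, and robot representation are hypothetical.

```python
def respond_to_failure(robot, failed_module, gait_library, makes_progress):
    """Try alternative gaits first; eject the failed module only as a last resort.

    `makes_progress(robot, gait)` is an assumed hook that reports whether the
    robot can still move usefully with the given gait.
    """
    for gait in gait_library:                      # first response: change locomotion style
        if makes_progress(robot, gait):
            robot["gait"] = gait
            return robot
    robot["modules"].remove(failed_module)         # second response: eject the failed module
    robot["gait"] = gait_library[-1]               # fall back to the most conservative gait
    return robot

# Toy usage: assume the failed leg blocks every gait, which forces ejection.
robot = {"modules": ["leg_1", "leg_2", "leg_3", "leg_4"], "gait": "trot"}
blocked_by_leg3 = lambda r, gait: "leg_3" not in r["modules"]
print(respond_to_failure(robot, "leg_3", ["trot", "crawl"], blocked_by_leg3))
```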
{"title":"Enabling robustness to failure with modular field robots","authors":"Troy Cordie, Jonathan Roberts, Matthew Dunbabin, Ross Dungavell, Tirthankar Bandyopadhyay","doi":"10.3389/frobt.2024.1225297","DOIUrl":"https://doi.org/10.3389/frobt.2024.1225297","url":null,"abstract":"Actuator failure on a remotely deployed robot results in decreased efficiency or even renders it inoperable. Robustness to these failures will become critical as robots are required to be more independent and operate out of the range of repair. To address these challenges, we present two approaches based on modular robotic architecture to improve robustness to actuator failure of both fixed-configuration robots and modular reconfigurable robots. Our work uses modular reconfigurable robots capable of modifying their style of locomotion and changing their designed morphology through ejecting modules. This framework improved the distance travelled and decreased the effort to move through the environment of simulated and physical robots. When the deployed robot was allowed to change its locomotion style, it showed improved robustness to actuator failure when compared to a robot with a fixed controller. Furthermore, a robot capable of changing its locomotion and design morphology statistically outlasted both tests with a fixed morphology. Testing was carried out using a gazebo simulation and validated in multiple tests in the field. We show for the first time that ejecting modular failed components can improve the overall mission length.","PeriodicalId":504612,"journal":{"name":"Frontiers in Robotics and AI","volume":"16 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140247940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing walking efficiency of adolescents with neurological impairments using an exosuit for ambulatory activities of daily living
Pub Date: 2024-03-13 | DOI: 10.3389/frobt.2024.1335733
Chiara Basla, Giulia Mariani, Peter Wolf, R. Riener, H. V. van Hedel
Introduction: Children and adolescents with neurological impairments face reduced participation and independence in daily life activities due to walking difficulties. Existing assistive devices often offer insufficient support, potentially leading to wheelchair dependence and limiting physical activity and engagement in daily life. Mobile wearable robots, such as exoskeletons and exosuits, have shown promise in supporting adults during activities of daily living but remain underexplored for children.
Methods: We conducted a cross-sectional study to examine the potential of a cable-driven exosuit, the Myosuit, to enhance walking efficiency in adolescents with diverse ambulatory impairments. Each participant walked a course including uphill, downhill, and level-ground walking as well as stair ascent and descent, with and without the exosuit’s assistance. We monitored the time and number of steps needed to complete the course, together with average heart rate and muscle activity. Additionally, we assessed the adolescents’ perspective on the exosuit’s utility using a visual analog scale.
Results: Six adolescents completed the study. Although the difference was not statistically significant, five participants completed the course in less time with the exosuit’s assistance (time reduction range: [-3.87, 17.42]%, p-value: 0.08, effect size: 0.88). The number of steps taken decreased significantly with the Myosuit’s assistance (step reduction range: [1.07, 15.71]%, p-value: 0.04, effect size: 0.90). Heart rate and muscle activity did not differ between the Myosuit-assisted and unassisted conditions (p-values: 0.96 and 0.35, effect sizes: 0.02 and 0.42, respectively). Participants generally perceived reduced effort and increased safety with the Myosuit’s assistance, especially during tasks involving concentric contractions (e.g., walking uphill). Three participants expressed a willingness to use the Myosuit in daily life, while the others found it heavy or too conspicuous.
Discussion: Increased walking speed without increased physical effort when performing activities of daily living could lead to higher levels of participation and greater functional independence. Despite perceiving the benefits of the exosuit’s assistance, the adolescents reported that the device design would need further modification before they would use it extensively at home and in the community.
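For illustration only, the snippet below shows the kind of paired, non-parametric comparison (n = 6, with vs. without the exosuit) that such small-sample results typically rely on; the numbers are fabricated placeholders, not the study’s data, and the choice of test is an assumption.

```python
import numpy as np
from scipy.stats import wilcoxon

# Fabricated course-completion times (seconds) for six participants, with and
# without exosuit assistance; values are placeholders for illustration only.
time_without = np.array([310.0, 295.0, 402.0, 350.0, 288.0, 330.0])
time_with    = np.array([280.0, 301.0, 360.0, 322.0, 270.0, 315.0])

stat, p = wilcoxon(time_without, time_with)   # paired, non-parametric comparison
reduction_pct = 100.0 * (time_without - time_with) / time_without
print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.3f}")
print("per-participant time reduction (%):", np.round(reduction_pct, 2))
```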
{"title":"Enhancing walking efficiency of adolescents with neurological impairments using an exosuit for ambulatory activities of daily living","authors":"Chiara Basla, Giulia Mariani, Peter Wolf, R. Riener, H. V. van Hedel","doi":"10.3389/frobt.2024.1335733","DOIUrl":"https://doi.org/10.3389/frobt.2024.1335733","url":null,"abstract":"Introduction: Children and adolescents with neurological impairments face reduced participation and independence in daily life activities due to walking difficulties. Existing assistive devices often offer insufficient support, potentially leading to wheelchair dependence and limiting physical activity and daily life engagement. Mobile wearable robots, such as exoskeletons and exosuits, have shown promise in supporting adults during activities of daily living but are underexplored for children.Methods: We conducted a cross-sectional study to examine the potential of a cable-driven exosuit, the Myosuit, to enhance walking efficiency in adolescents with diverse ambulatory impairments. Each participant walked a course including up-hill, down-hill, level ground walking, and stairs ascending and descending, with and without the exosuit’s assistance. We monitored the time and step count to complete the course and the average heart rate and muscle activity. Additionally, we assessed the adolescents’ perspective on the exosuit’s utility using a visual analog scale.Results: Six adolescents completed the study. Although not statistically significant, five participants completed the course with the exosuit’s assistance in reduced time (time reduction range: [-3.87, 17.42]%, p-value: 0.08, effect size: 0.88). The number of steps taken decreased significantly with the Myosuit’s assistance (steps reduction range: [1.07, 15.71]%, p-value: 0.04, effect size: 0.90). Heart rate and muscle activity did not differ between Myosuit-assisted and unassisted conditions (p-value: 0.96 and 0.35, effect size: 0.02 and 0.42, respectively). Participants generally perceived reduced effort and increased safety with the Myosuit’s assistance, especially during tasks involving concentric contractions (e.g., walking uphill). Three participants expressed a willingness to use the Myosuit in daily life, while the others found it heavy or too conspicuous.Discussion: Increased walking speed without increasing physical effort when performing activities of daily living could lead to higher levels of participation and increased functional independence. Despite perceiving the benefits introduced by the exosuit’s assistance, adolescents reported the need for further modification of the device design before using it extensively at home and in the community.","PeriodicalId":504612,"journal":{"name":"Frontiers in Robotics and AI","volume":"30 5","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140247840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Editorial: Artificial intelligence solutions for decision making in robotics
Pub Date: 2024-03-12 | DOI: 10.3389/frobt.2024.1389191
Q. Abu Al-haija
{"title":"Editorial: Artificial intelligence solutions for decision making in robotics","authors":"Q. Abu Al-haija","doi":"10.3389/frobt.2024.1389191","DOIUrl":"https://doi.org/10.3389/frobt.2024.1389191","url":null,"abstract":"","PeriodicalId":504612,"journal":{"name":"Frontiers in Robotics and AI","volume":"137 37","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140251465","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards an AI-driven soft toy for automatically detecting and classifying infant-toy interactions using optical force sensors
Pub Date: 2024-03-12 | DOI: 10.3389/frobt.2024.1325296
Rithwik Udayagiri, Jessica Yin, Xinyao Cai, William Townsend, Varun Trivedi, Rohan Shende, O. F. Sowande, Laura Prosser, James H. Pikul, Michelle J. Johnson
Introduction: It is crucial to identify neurodevelopmental disorders in infants early so that timely interventions can improve their long-term outcomes. Combining natural play with quantitative measurement of developmental milestones can be an effective way to swiftly and efficiently detect infants who are at risk of neurodevelopmental delays. Clinical studies have established differences in toy-interaction behaviors between full-term infants and pre-term infants who are at risk for cerebral palsy and other developmental disorders.
Methods: The proposed toy aims to improve the quantitative assessment of infant-toy interactions and to fully automate the detection of infants at risk of developing motor delays. This paper describes the design and development of a toy that uniquely utilizes a collection of soft lossy force sensors, built from optical fibers, to gather play-interaction data from infants lying supine in a gym. An example interaction database was created by having 15 adults complete a total of 2480 interactions with the toy, consisting of 620 touches, 620 punches (“kick substitute”), 620 weak grasps, and 620 strong grasps.
Results: The data were analyzed for patterns of interaction with the toy face using a machine learning model developed to classify the four interaction types present in the database. The results indicate that the configuration of six soft force sensors on the face created unique activation patterns.
Discussion: The machine learning algorithm was able to identify the distinct action types from the data, suggesting the potential usability of the toy. Next steps involve sensorizing the entire toy and testing with infants.
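As a hedged sketch of the classification step, the code below trains a generic classifier to map 6-sensor activation patterns to the four interaction types; the synthetic feature generator, the random-forest model, and all parameter values are assumptions rather than the paper’s method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
CLASSES = ["touch", "punch", "weak_grasp", "strong_grasp"]

def synth_sample(label):
    """Fabricated 6-sensor peak-force pattern per interaction type (assumption only)."""
    peak = {"touch": 0.2, "punch": 0.9, "weak_grasp": 0.4, "strong_grasp": 0.8}[label]
    n_active = 6 if label.endswith("grasp") else 2   # assume grasps activate more sensors
    x = np.zeros(6)
    active = rng.choice(6, size=n_active, replace=False)
    x[active] = peak + 0.1 * rng.standard_normal(n_active)
    return x

# 620 samples per class, mirroring the size of the example interaction database.
X = np.array([synth_sample(c) for c in CLASSES for _ in range(620)])
y = np.array([c for c in CLASSES for _ in range(620)])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```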
{"title":"Towards an AI-driven soft toy for automatically detecting and classifying infant-toy interactions using optical force sensors","authors":"Rithwik Udayagiri, Jessica Yin, Xinyao Cai, William Townsend, Varun Trivedi, Rohan Shende, O. F. Sowande, Laura Prosser, James H. Pikul, Michelle J. Johnson","doi":"10.3389/frobt.2024.1325296","DOIUrl":"https://doi.org/10.3389/frobt.2024.1325296","url":null,"abstract":"Introduction: It is crucial to identify neurodevelopmental disorders in infants early on for timely intervention to improve their long-term outcomes. Combining natural play with quantitative measurements of developmental milestones can be an effective way to swiftly and efficiently detect infants who are at risk of neurodevelopmental delays. Clinical studies have established differences in toy interaction behaviors between full-term infants and pre-term infants who are at risk for cerebral palsy and other developmental disorders.Methods: The proposed toy aims to improve the quantitative assessment of infant-toy interactions and fully automate the process of detecting those infants at risk of developing motor delays. This paper describes the design and development of a toy that uniquely utilizes a collection of soft lossy force sensors which are developed using optical fibers to gather play interaction data from infants laying supine in a gym. An example interaction database was created by having 15 adults complete a total of 2480 interactions with the toy consisting of 620 touches, 620 punches—“kick substitute,” 620 weak grasps and 620 strong grasps.Results: The data is analyzed for patterns of interaction with the toy face using a machine learning model developed to classify the four interactions present in the database. Results indicate that the configuration of 6 soft force sensors on the face created unique activation patterns.Discussion: The machine learning algorithm was able to identify the distinct action types from the data, suggesting the potential usability of the toy. Next steps involve sensorizing the entire toy and testing with infants.","PeriodicalId":504612,"journal":{"name":"Frontiers in Robotics and AI","volume":"5 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140248804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A survey on autonomous environmental monitoring approaches: towards unifying active sensing and reinforcement learning
Pub Date: 2024-03-12 | DOI: 10.3389/frobt.2024.1336612
David Mansfield, Allahyar Montazeri
The environmental pollution caused by various sources has escalated the climate crisis, making the need to establish reliable, intelligent, and persistent environmental monitoring solutions more crucial than ever. Mobile sensing systems are a popular platform due to their cost-effectiveness and adaptability. In practice, however, operating environments demand highly intelligent and robust systems that can cope with an environment’s changing dynamics. Reinforcement learning has become a popular tool for achieving this, as it facilitates the training of intelligent and robust sensing agents that can handle unknown and extreme conditions. In this paper, a framework that formulates active sensing as a reinforcement learning problem is proposed. This framework unifies multiple essential environmental monitoring tasks and algorithms, such as coverage, patrolling, source seeking, exploration, and search and rescue. The unified framework represents a step towards bridging the divide between theoretical advances in reinforcement learning and real-world applications in environmental monitoring. A critical review of the literature in this field is carried out, and it finds that, despite the potential of reinforcement learning for environmental active-sensing applications, practical implementations are still lacking and most work remains at the simulation stage. It is also noted that, despite the consensus that multi-agent systems are crucial to fully realizing the potential of active sensing, there is a lack of research in this area.
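To illustrate what formulating active sensing as a reinforcement learning problem can look like, here is a minimal, assumed example: a source-seeking task on a grid cast as an MDP and solved with tabular Q-learning. The environment, reward shaping, and hyperparameters are illustrative choices, not taken from the survey.

```python
import numpy as np

class SourceSeekingGrid:
    """Agent moves on an n x n grid; reward is the increase in sensed concentration."""
    ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

    def __init__(self, n=10, source=(7, 3)):
        self.n, self.source = n, np.array(source)
        self.reset()

    def _concentration(self, pos):
        return float(np.exp(-np.linalg.norm(pos - self.source) / 3.0))

    def reset(self):
        self.pos = np.array([0, 0])
        return tuple(self.pos)

    def step(self, a):
        before = self._concentration(self.pos)
        self.pos = np.clip(self.pos + self.ACTIONS[a], 0, self.n - 1)
        reward = self._concentration(self.pos) - before   # proxy for information gain
        done = bool(np.array_equal(self.pos, self.source))
        return tuple(self.pos), reward, done

# Tabular Q-learning as a stand-in for the RL methods discussed in the survey.
env, Q = SourceSeekingGrid(), {}
alpha, gamma, eps = 0.5, 0.95, 0.1
rng = np.random.default_rng(3)
for _ in range(500):
    s = env.reset()
    for _ in range(200):
        q = Q.setdefault(s, np.zeros(4))
        a = int(rng.integers(4)) if rng.random() < eps else int(q.argmax())
        s2, r, done = env.step(a)
        q[a] += alpha * (r + gamma * Q.setdefault(s2, np.zeros(4)).max() - q[a])
        s = s2
        if done:
            break
print("states visited during training:", len(Q))
```

Coverage, patrolling, or search-and-rescue variants would keep the same state-action-reward structure and differ mainly in the reward definition, which is the unifying point the survey makes.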
{"title":"A survey on autonomous environmental monitoring approaches: towards unifying active sensing and reinforcement learning","authors":"David Mansfield, Allahyar Montazeri","doi":"10.3389/frobt.2024.1336612","DOIUrl":"https://doi.org/10.3389/frobt.2024.1336612","url":null,"abstract":"The environmental pollution caused by various sources has escalated the climate crisis making the need to establish reliable, intelligent, and persistent environmental monitoring solutions more crucial than ever. Mobile sensing systems are a popular platform due to their cost-effectiveness and adaptability. However, in practice, operation environments demand highly intelligent and robust systems that can cope with an environment’s changing dynamics. To achieve this reinforcement learning has become a popular tool as it facilitates the training of intelligent and robust sensing agents that can handle unknown and extreme conditions. In this paper, a framework that formulates active sensing as a reinforcement learning problem is proposed. This framework allows unification with multiple essential environmental monitoring tasks and algorithms such as coverage, patrolling, source seeking, exploration and search and rescue. The unified framework represents a step towards bridging the divide between theoretical advancements in reinforcement learning and real-world applications in environmental monitoring. A critical review of the literature in this field is carried out and it is found that despite the potential of reinforcement learning for environmental active sensing applications there is still a lack of practical implementation and most work remains in the simulation phase. It is also noted that despite the consensus that, multi-agent systems are crucial to fully realize the potential of active sensing there is a lack of research in this area.","PeriodicalId":504612,"journal":{"name":"Frontiers in Robotics and AI","volume":"2 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140249223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robotic systems for upper-limb rehabilitation in multiple sclerosis: a SWOT analysis and the synergies with virtual and augmented environments
Pub Date: 2024-02-27 | DOI: 10.3389/frobt.2024.1335147
G. A. Albanese, Anna Bucchieri, J. Podda, A. Tacchino, S. Buccelli, Elena De Momi, M. Laffranchi, Kailynn Mannella, Michael W. R. Holmes, J. Zenzeri, L. D. De Michieli, G. Brichetto, G. Barresi
The robotics discipline is exploring precise and versatile solutions for upper-limb rehabilitation in Multiple Sclerosis (MS). People with MS can greatly benefit from robotic systems that help combat the complexities of this disease, which can impair the ability to perform activities of daily living (ADLs). To present the potential and the limitations of smart mechatronic devices in this clinical domain, this review proposes a concise SWOT (Strengths, Weaknesses, Opportunities, and Threats) analysis of robotic rehabilitation in MS. Through the SWOT analysis, a method mostly adopted in business management, the paper addresses both internal and external factors that can promote or hinder the adoption of upper-limb rehabilitation robots in MS. Subsequently, it discusses how synergy with another category of interaction technologies - the systems underlying virtual and augmented environments - may empower Strengths, overcome Weaknesses, expand Opportunities, and handle Threats in rehabilitation robotics for MS. The adaptability of these digital settings, which are extensively used in MS rehabilitation, even to approach ADL-like tasks in safe simulated contexts, is the main reason for adopting this approach to the critical issues identified in the SWOT analysis. This methodological proposal aims to pave the way for devising further synergistic strategies based on the integration of medical robotic devices with other promising technologies to support upper-limb functional recovery in MS.
{"title":"Robotic systems for upper-limb rehabilitation in multiple sclerosis: a SWOT analysis and the synergies with virtual and augmented environments","authors":"G. A. Albanese, Anna Bucchieri, J. Podda, A. Tacchino, S. Buccelli, Elena De Momi, M. Laffranchi, Kailynn Mannella, Michael W. R. Holmes, J. Zenzeri, L. D. De Michieli, G. Brichetto, G. Barresi","doi":"10.3389/frobt.2024.1335147","DOIUrl":"https://doi.org/10.3389/frobt.2024.1335147","url":null,"abstract":"The robotics discipline is exploring precise and versatile solutions for upper-limb rehabilitation in Multiple Sclerosis (MS). People with MS can greatly benefit from robotic systems to help combat the complexities of this disease, which can impair the ability to perform activities of daily living (ADLs). In order to present the potential and the limitations of smart mechatronic devices in the mentioned clinical domain, this review is structured to propose a concise SWOT (Strengths, Weaknesses, Opportunities, and Threats) Analysis of robotic rehabilitation in MS. Through the SWOT Analysis, a method mostly adopted in business management, this paper addresses both internal and external factors that can promote or hinder the adoption of upper-limb rehabilitation robots in MS. Subsequently, it discusses how the synergy with another category of interaction technologies - the systems underlying virtual and augmented environments - may empower Strengths, overcome Weaknesses, expand Opportunities, and handle Threats in rehabilitation robotics for MS. The impactful adaptability of these digital settings (extensively used in rehabilitation for MS, even to approach ADL-like tasks in safe simulated contexts) is the main reason for presenting this approach to face the critical issues of the aforementioned SWOT Analysis. This methodological proposal aims at paving the way for devising further synergistic strategies based on the integration of medical robotic devices with other promising technologies to help upper-limb functional recovery in MS.","PeriodicalId":504612,"journal":{"name":"Frontiers in Robotics and AI","volume":"22 S1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140426616","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}