Time-to-Collision Based Social Force Model for Intelligent Agents on Shared Public Spaces
Pub Date: 2024-09-06 | DOI: 10.1007/s12369-024-01171-9
Alireza Jafari, Yen-Chen Liu
Intelligent transportation modes such as autonomous robots and electric scooters with ride assistance are gaining popularity, but their integration into public spaces poses challenges to pedestrian safety and comfort. However, attempts to address the problem are scattered and sometimes contradictory. Models describing the behavior of heterogeneous crowds are necessary for evaluating solutions before implementation. Moreover, autonomous agents benefit from these models, aiming to operate more efficiently while prioritizing pedestrian safety. The novelty of the proposed model is integrating time-to-collision, an indicator of road users’ subjective safety, into the social force model, the primary tool for pedestrian movement prediction. Moreover, the model considers the cumulative effects of anticipating other agents’ trajectories and the resulting times-to-collision within a specific time horizon. We conduct controlled experiments using electric scooters to calibrate the model, discuss the distribution of parameter sets, and present pooled parameter population properties. Furthermore, we validate the model’s performance for electric scooters in complex scenarios and compare it with previous models using behavior naturalness metrics. Lastly, we compare the model’s accuracy and computational cost to those of existing models. The model is computationally cheap and better equipped to estimate nearby people’s comfort level, making it a better candidate for intelligent agents’ path-planning algorithms in shared spaces.
{"title":"Time-to-Collision Based Social Force Model for Intelligent Agents on Shared Public Spaces","authors":"Alireza Jafari, Yen-Chen Liu","doi":"10.1007/s12369-024-01171-9","DOIUrl":"https://doi.org/10.1007/s12369-024-01171-9","url":null,"abstract":"<p>Intelligent transportation modes such as autonomous robots and electric scooters with ride assistance are gaining popularity, but their integration into public spaces poses challenges to pedestrian safety and comfort. Nevertheless, the attempts to address the problem are scattered and sometimes contradictory. Models describing the behavior of heterogeneous crowds are necessary for solution evaluation before implementation. Moreover, autonomous agents benefit from these models, aiming to operate more efficiently while prioritizing pedestrian safety. The novelty of the proposed model is integrating time-to-collision, an indicator of road users’ subjective safety, into the social force model, the primary tool for pedestrian movement predictions. Moreover, the model considers the cumulative effects of anticipating other agents’ trajectories and the incurred time-to-collisions within a specific time horizon. We conduct controlled experiments using electric scooters to calibrate the model, discuss the distribution of parameter sets, and present pooled parameter population properties. Furthermore, we validate the model’s performance for electric scooters in complex scenarios and compare it with previous models using behavior naturalness metrics. Lastly, we compare the model’s accuracy and computation resource intensity to existing models. The model is computationally cheap and better equipped to estimate nearby people’s comfort level, making it a better candidate for intelligent agents’ path-planning algorithms in shared spaces.</p>","PeriodicalId":14361,"journal":{"name":"International Journal of Social Robotics","volume":"59 1","pages":""},"PeriodicalIF":4.7,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142217544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Investigation of Joint Action in Go/No-Go Tasks: Development of a Human-Like Eye Robot and Verification of Action Space
Pub Date: 2024-08-27 | DOI: 10.1007/s12369-024-01168-4
Kotaro Hayashi
Human–robot collaboration (HRC) is a natural progression of technological development and can improve job performance, address labor shortages, and reduce labor costs. However, it is still uncertain whether joint action, similar to that occurring between humans, can be replicated between humans and robots. Many robotics researchers have focused on joint action, and gaze cueing has been shown to play a significant role in this context. Previous studies on joint action have used humanoids; however, the robots typically employed in HRC research lack the human-like eyes needed for such verification. Therefore, this study focuses on the development of an eye robot with gaze-cueing behaviors that can be easily integrated into existing robotic systems. As a second theme, we propose fixation duration as a new metric, distinct from the commonly used response time, for the quantitative evaluation of joint action research. These contributions are verified through a Go/No-go task under six conditions: three behavioral conditions (joint action, joint attention only, and alone), each with two partner conditions (robot or human partner). While developing a human-like eye robot, this study demonstrates the potential of a robot to be a better joint action partner than an unfamiliar human, with participants exhibiting the best reaction times when partnered with the robot. The participants' shared action space was investigated, where a transference of the action space indicates the expression of joint action. The fixation durations indicate that the proposed robot causes participants to shift their action space to include that of the robot. These results suggest that the proposed collaborative robot can initiate joint action between a robot and a human, and can perform as a more effective partner in joint actions than an unfamiliar human. This study showcases the capacity of fixation duration as a quantitative assessment metric for joint action.
{"title":"Investigation of Joint Action in Go/No-Go Tasks: Development of a Human-Like Eye Robot and Verification of Action Space","authors":"Kotaro Hayashi","doi":"10.1007/s12369-024-01168-4","DOIUrl":"https://doi.org/10.1007/s12369-024-01168-4","url":null,"abstract":"<p>Human–robot collaboration (HRC) is a natural progression of technological development and can improve job performance, address labor shortages, and reduce labor costs. However, it is still uncertain whether joint action, similar to that occurring between humans, can be replicated between humans and robots. Many robotic researchers have focused on joint action, and it has been demonstrated that gaze cueing plays a significant role in this context. Currently, previous studies on joint action use humanoids; however, robots utilized in the research on human-robot collaboration lack human-like eyes needed for verification. Therefore, this study focuses on the development of an eye robot with gaze-cueing behaviors that can be easily integrated into existing robotic systems. As another theme of this study, we proposed the use of fixation duration as a new metric, which is distinct from the commonly used response time, for the quantitative evaluation of joint action research. These are verified through a Go/No-go task under six conditions—three behavioral (i.e., joint action, joint attention-only, and alone), each with two partner conditions (robot or human partner). While developing a human-like eye robot, this study demonstrates the potential of a robot to be a better joint action partner than an uncertain human, with participants exhibiting the best reaction times when partnered with a robot. The shared action space of the participants was investigated, where a transference of the action space indicates the expression of joint action. The fixation duration indicates that the proposed robot cause participants to move their action space to include that of the robot. These results suggest that the proposed collaborative robot can initiate a joint action between a robot and a human, and can perform as a more effective partner in joint actions compared to an unfamiliar human. This study showcased the capacity of fixation duration as a quantitative assessment metric for joint action.</p>","PeriodicalId":14361,"journal":{"name":"International Journal of Social Robotics","volume":"26 1","pages":""},"PeriodicalIF":4.7,"publicationDate":"2024-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142217545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How Non-experts Kinesthetically Teach a Robot over Multiple Sessions: Diversity in Teaching Styles and Effects on Performance
Pub Date: 2024-08-23 | DOI: 10.1007/s12369-024-01164-8
Pourya Aliasghari, Moojan Ghafurian, Chrystopher L. Nehaniv, Kerstin Dautenhahn
In real-world applications, robots should adapt to users and environments; however, users may not know how to teach new tasks to a robot. We studied whether participants without any experience in teaching a robot would become more proficient robot teachers through repeated kinesthetic human–robot teaching interactions. An experiment was conducted with twenty-eight participants who were asked to kinesthetically teach a humanoid robot different cleaning tasks in five repeated sessions, each session including four tasks. Throughout the sessions, participants’ gaze patterns, methods of manipulating the robot’s arm, their perceived workload, and some physical properties of the demonstrated actions were measured. Our data analyses revealed a diversity in non-experts’ human–robot teaching styles in repeated interactions. Three clusters of human teachers were identified based on participants’ performance in providing the demonstrations. The majority of participants significantly improved their success and speed of kinesthetic demonstrations by performing multiple rounds of teaching the robot. Overall, participants gazed less often at the robot’s hand and perceived less effort over repeated sessions. Our findings highlight how non-experts adapt to robot teaching by being exposed repeatedly to human–robot teaching tasks, without any formal training or external intervention, and we identify the characteristics of successful and improving human teachers.
{"title":"How Non-experts Kinesthetically Teach a Robot over Multiple Sessions: Diversity in Teaching Styles and Effects on Performance","authors":"Pourya Aliasghari, Moojan Ghafurian, Chtystopher L. Nehaniv, Kerstin Dautenhahn","doi":"10.1007/s12369-024-01164-8","DOIUrl":"https://doi.org/10.1007/s12369-024-01164-8","url":null,"abstract":"<p>In real-world applications, robots should adapt to users and environments; however, users may not know how to teach new tasks to a robot. We studied whether participants without any experience in teaching a robot would become more proficient robot teachers through repeated kinesthetic human–robot teaching interactions. An experiment was conducted with twenty-eight participants who were asked to kinesthetically teach a humanoid robot different cleaning tasks in five repeated sessions, each session including four tasks. Throughout the sessions, participants’ gaze patterns, methods of manipulating the robot’s arm, their perceived workload, and some physical properties of the demonstrated actions were measured. Our data analyses revealed a diversity in non-experts’ human–robot teaching styles in repeated interactions. Three clusters of human teachers were identified based on participants’ performance in providing the demonstrations. The majority of participants significantly improved their success and speed of kinesthetic demonstrations by performing multiple rounds of teaching the robot. Overall, participants gazed less often at the robot’s hand and perceived less effort over repeated sessions. Our findings highlight how non-experts adapt to robot teaching by being exposed repeatedly to human–robot teaching tasks, without any formal training or external intervention, and we identify the characteristics of successful and improving human teachers.\u0000</p>","PeriodicalId":14361,"journal":{"name":"International Journal of Social Robotics","volume":"19 1","pages":""},"PeriodicalIF":4.7,"publicationDate":"2024-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142217548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Child Factor in Child–Robot Interaction: Discovering the Impact of Developmental Stage and Individual Characteristics
Pub Date: 2024-08-14 | DOI: 10.1007/s12369-024-01121-5
Irina Rudenko, Andrey Rudenko, Achim J. Lilienthal, Kai O. Arras, Barbara Bruno
Social robots, owing to their embodied physical presence in human spaces and their ability to directly interact with users and their environment, have great potential to support children in various activities in education, healthcare, and daily life. Child–Robot Interaction (CRI), like any domain involving children, inevitably faces the major challenge of designing generalized strategies for working with unique, turbulent, and very diverse individuals. Addressing this challenging endeavor requires combining the robot-centered perspective, i.e. what robots technically can do and are best positioned to do, with the child-centered perspective, i.e. what children may gain from the robot and how the robot should act to best support them in reaching the goals of the interaction. This article aims to help researchers bridge the two perspectives and proposes grounding the development of CRI scenarios in insights from child psychology and child development theories. To that end, we review the outcomes of CRI studies, outline common trends and challenges, and identify two key factors from child psychology that impact child–robot interactions, especially in a long-term perspective: developmental stage and individual characteristics. For both, we discuss prospective experiment designs that support building naturally engaging and sustainable interactions.
{"title":"The Child Factor in Child–Robot Interaction: Discovering the Impact of Developmental Stage and Individual Characteristics","authors":"Irina Rudenko, Andrey Rudenko, Achim J. Lilienthal, Kai O. Arras, Barbara Bruno","doi":"10.1007/s12369-024-01121-5","DOIUrl":"https://doi.org/10.1007/s12369-024-01121-5","url":null,"abstract":"<p>Social robots, owing to their embodied physical presence in human spaces and the ability to directly interact with the users and their environment, have a great potential to support children in various activities in education, healthcare and daily life. Child–Robot Interaction (CRI), as any domain involving children, inevitably faces the major challenge of designing generalized strategies to work with unique, turbulent and very diverse individuals. Addressing this challenging endeavor requires to combine the standpoint of the robot-centered perspective, i.e. what robots technically can and are best positioned to do, with that of the child-centered perspective, i.e. what children may gain from the robot and how the robot should act to best support them in reaching the goals of the interaction. This article aims to help researchers bridge the two perspectives and proposes to address the development of CRI scenarios with insights from child psychology and child development theories. To that end, we review the outcomes of the CRI studies, outline common trends and challenges, and identify two key factors from child psychology that impact child-robot interactions, especially in a long-term perspective: developmental stage and individual characteristics. For both of them we discuss prospective experiment designs which support building naturally engaging and sustainable interactions.</p>","PeriodicalId":14361,"journal":{"name":"International Journal of Social Robotics","volume":"58 1","pages":""},"PeriodicalIF":4.7,"publicationDate":"2024-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142217546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Is the Robot Spying on me? A Study on Perceived Privacy in Telepresence Scenarios in a Care Setting with Mobile and Humanoid Robots
Pub Date: 2024-08-13 | DOI: 10.1007/s12369-024-01153-x
Celia Nieto Agraz, Pascal Hinrichs, Marco Eichelberg, Andreas Hein
The number of robots in use worldwide is increasing, and they are starting to be used in areas where the use of robotics was previously impossible, such as nursing care. This brings new challenges, one of which is privacy preservation. Privacy in robotics is still a young field that has not been studied in depth, even though some studies show it is a crucial factor. In this article, we investigate how users feel about their privacy when interacting in a telepresence scenario with three different technical means: a laptop computer with a built-in camera, the mobile robot Temi, and the humanoid robot Ameca. For the humanoid robot, we implemented behaviors drawn from human interaction that do not directly deactivate the sensors but symbolize their deactivation. We conducted a user study with 21 participants. We did not find any statistically significant difference between the three means, which suggests that the robotic solutions are also well received and that people feel comfortable around them. In addition, we found that the best way for a humanoid robot to signal privacy to participants is to perform actions in which it closes its eyes, conveying a sense of deactivation. Lastly, the results show that even though the acceptance of a humanoid robot is quite good, further work is needed to strengthen users' sense of control and thereby their trust in the robot.
{"title":"Is the Robot Spying on me? A Study on Perceived Privacy in Telepresence Scenarios in a Care Setting with Mobile and Humanoid Robots","authors":"Celia Nieto Agraz, Pascal Hinrichs, Marco Eichelberg, Andreas Hein","doi":"10.1007/s12369-024-01153-x","DOIUrl":"https://doi.org/10.1007/s12369-024-01153-x","url":null,"abstract":"<p>The number of robots that are in use worldwide is increasing, and they are starting to be used in new areas, where a use of robotics was impossible in the past, such as nursing care. This brings about new challenges that need to be addressed, one of them is the challenge of privacy preservation. Privacy in robotics is still a very new field that has not been studied deeply yet, even though some studies show that it is a crucial factor. In this article, we investigate how users feel about their privacy when interacting in a telepresence scenario with three different technical means: a laptop computer with a built-in camera, the mobile robot Temi and the humanoid robot Ameca. Behaviors from human interaction were implemented for the humanoid robot, which are not aimed directly at deactivating the sensors, but symbolize this deactivation. We conducted a user study with 21 participants. We did not find out any statistical significant difference between the elements, which shows that the robotic solutions are also popular and people feel comfortable around them. In addition, we also found out that the best way for a humanoid robot to indicate privacy to the participants is to perform actions where it closes the eyes and gives a sense of deactivation. Lastly, the results show that even though the acceptance of a humanoid robot is quite good, further work is needed to increase the control feeling in order to increase the trust of the user over it.\u0000</p>","PeriodicalId":14361,"journal":{"name":"International Journal of Social Robotics","volume":"58 1","pages":""},"PeriodicalIF":4.7,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142217547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How an Android Expresses “Now Loading…”: Examining the Properties of Thinking Faces
Pub Date: 2024-08-05 | DOI: 10.1007/s12369-024-01163-9
Shushi Namba, Wataru Sato, Saori Namba, Alexander Diel, Carlos Ishi, Takashi Minato
The “thinking face” is a facial signal used to convey being in thought. For androids, the thinking face may be important for achieving natural human–robot interaction. However, the facial pattern necessary for portraying the thinking face remains unclear and has not yet been investigated in androids. The current study aims to (a) identify the facial patterns people show when answering complex questions (i.e., thinking faces) and (b) clarify whether implementing the observed thinking faces in an android can facilitate natural human–robot interaction. In Study 1, we analyze the facial movements of 40 participants after they are prompted with difficult questions and identify five facial patterns that correspond to thinking faces. In Study 2, we focus on one of these patterns, furrowing of the brows combined with narrowing of the eyes, and implement it in an android. The results show that thinking faces enhance the perception of being in thought, genuineness, human-likeness, and appropriateness in androids while decreasing eeriness. The free-description data also reveal that negative emotions are attributed to the thinking face. In Study 3, we compare the thinking and neutral faces in a question–answer situation. The results show that the android's thinking face facilitates the perception of being in thought and human-likeness. These findings suggest that the thinking face of androids can facilitate natural human–robot interaction.
{"title":"How an Android Expresses “Now Loading…”: Examining the Properties of Thinking Faces","authors":"Shushi Namba, Wataru Sato, Saori Namba, Alexander Diel, Carlos Ishi, Takashi Minato","doi":"10.1007/s12369-024-01163-9","DOIUrl":"https://doi.org/10.1007/s12369-024-01163-9","url":null,"abstract":"<p>The “thinking face” is a facial signal used to convey being in thought. For androids, the thinking face may be important to achieve natural human–robot interaction. However, the facial pattern necessary for portraying the thinking face remains unclear and has not yet been investigated in androids. The current study aims to (a) identify the facial patterns when people are engaged in answering complex questions (i.e., thinking face) and (b) clarify whether implementing the observed thinking faces in an android can facilitate natural human–robot interaction. In Study 1, we analyze the facial movements of 40 participants after they are prompted with difficult questions and indicate five facial patterns that corresponded to thinking faces. In Study 2, we further focus on the pattern of furrowing of the brows and narrowing of the eyes among the observed thinking facial patterns and implement this pattern in an android. The results show that thinking faces enhance the perception of being in thought, genuineness, human-likeness, and appropriateness in androids while decreasing eeriness. The free-description data also revealed that negative emotions are attributed to the thinking face. In Study 3, we compared the thinking vs. neutral faces in a question–answer situation. The results showed that the android's thinking face facilitated the perception of being in thought and human-likeness. These findings suggest that the thinking face of androids can facilitate natural human–robot interaction.</p>","PeriodicalId":14361,"journal":{"name":"International Journal of Social Robotics","volume":"22 1","pages":""},"PeriodicalIF":4.7,"publicationDate":"2024-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141931386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human–Robot Companionship: Current Trends and Future Agenda
Pub Date: 2024-07-24 | DOI: 10.1007/s12369-024-01160-y
Eshtiak Ahmed, Oğuz ‘Oz’ Buruk, Juho Hamari
The field of robotics has grown exponentially over the years, especially its social aspect, which has enabled robots to interact with humans meaningfully. Robots are now used in many domains, such as manufacturing, healthcare, education, entertainment, and rehabilitation. Along with their widespread usage in many real-life environments, robots have been used as companions to humans. With the growing amount of research on human–robot companionship (HRC), it is important to understand how this domain is developing, in which direction, and what the future might hold. There is also a need to understand the influencing factors and the kinds of empirical results reported in the literature. To address these questions, we conducted a systematic literature review and analyzed a final set of 134 relevant articles. The findings suggest that anthropomorphic and zoomorphic robots are more popular as human companions, while there is a lack of interest in functional and caricatured robots. Human-like and animal-like features are also implemented more often in companion robots. Studies rarely exploit the mobility available in these robots in companionship scenarios, especially in outdoor settings. In addition, co-existence and co-performance with humans have rarely been implemented. Based on the results, we propose a future research agenda comprising thematic, theoretical, methodological, and technological agendas. This study helps clarify the current state and usage of robotic companions, which can in turn aid in determining how HRC can be leveraged and integrated more seamlessly into human lives.
{"title":"Human–Robot Companionship: Current Trends and Future Agenda","authors":"Eshtiak Ahmed, Oğuz ‘Oz’ Buruk, Juho Hamari","doi":"10.1007/s12369-024-01160-y","DOIUrl":"https://doi.org/10.1007/s12369-024-01160-y","url":null,"abstract":"<p>The field of robotics has grown exponentially over the years, especially the social aspect, which has enabled robots to interact with humans meaningfully. Robots are now used in many domains, such as manufacturing, healthcare, education, entertainment, rehabilitation, etc. Along with their widespread usage in many real-life environments, robots have been used as companions to humans. With the increased amount of research done on human–robot companionship (HRC), it is important to understand how this domain is developing, in which direction, and what the future might hold. There is also a need to understand the influencing factors and what kind of empirical results are in the literature. To address these questions, we conducted a systematic literature review and analyzed a final number of 134 relevant articles. The findings suggest that anthropomorphic and zoomorphic robots are more popular as human companions, while there is a lack of interest in functional and caricatured robots. Also, human-like and animal-like features are implemented more in companion robots. Studies rarely exploit the mobility available in these robots in companionship scenarios, especially in outdoor settings. In addition to that, co-existence and co-performance-based implementation with humans have been observed rarely. Based on the results, we propose a future research agenda that includes thematic, theoretical, methodological, and technological agendas. This study will help us understand the current state and usage of robotic companions which will then potentially aid in determining how HRC can be leveraged and integrated more seamlessly into human lives for better effectiveness.</p>","PeriodicalId":14361,"journal":{"name":"International Journal of Social Robotics","volume":"66 1","pages":""},"PeriodicalIF":4.7,"publicationDate":"2024-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141785884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing Robotic Collaborative Tasks Through Contextual Human Motion Prediction and Intention Inference
Pub Date: 2024-07-13 | DOI: 10.1007/s12369-024-01140-2
Javier Laplaza, Francesc Moreno, Alberto Sanfeliu
Predicting human motion from a sequence of past observations is crucial for various applications in robotics and computer vision. Currently, this problem is typically addressed by training deep learning models on the well-known 3D human motion datasets widely used in the community. However, these datasets generally do not consider how humans behave and move when a robot is nearby, leading to a data distribution different from the real distribution of motion that robots will encounter when collaborating with humans. Additionally, incorporating contextual information about the interactive task between the human and the robot, as well as information on the human's willingness to collaborate, can not only improve the accuracy of the predicted sequence but also serve as a useful tool for robots to navigate collaborative tasks successfully. In this research, we propose a deep learning architecture that predicts both 3D human body motion and human intention for collaborative tasks. The model employs a multi-head attention mechanism, taking human motion and task context as inputs. The resulting outputs include the predicted motion of the human body and the inferred human intention. We validate this architecture on two different tasks: collaborative object handover and collaborative grape harvesting. While the architecture remains the same for both tasks, the inputs differ. In the handover task, the architecture takes human motion, the robot end effector, and obstacle positions as inputs. Additionally, the model can be conditioned on the desired intention to tailor the output motion accordingly. To assess performance on the collaborative handover task, we conducted a user study evaluating human perception of the robot's sociability, naturalness, security, and comfort, comparing the robot's behavior when it used the prediction in its planner versus when it did not. Furthermore, we applied the model to a collaborative grape-harvesting task. By integrating human motion prediction and human intention inference, our architecture shows promising results in enhancing the capabilities of robots in collaborative scenarios. The model's flexibility allows it to handle various tasks with different inputs, making it adaptable to real-world applications.
{"title":"Enhancing Robotic Collaborative Tasks Through Contextual Human Motion Prediction and Intention Inference","authors":"Javier Laplaza, Francesc Moreno, Alberto Sanfeliu","doi":"10.1007/s12369-024-01140-2","DOIUrl":"https://doi.org/10.1007/s12369-024-01140-2","url":null,"abstract":"<p>Predicting human motion based on a sequence of past observations is crucial for various applications in robotics and computer vision. Currently, this problem is typically addressed by training deep learning models using some of the most well-known 3D human motion datasets widely used in the community. However, these datasets generally do not consider how humans behave and move when a robot is nearby, leading to a data distribution different from the real distribution of motion that robots will encounter when collaborating with humans. Additionally, incorporating contextual information related to the interactive task between the human and the robot, as well as information on the human willingness to collaborate with the robot, can improve not only the accuracy of the predicted sequence but also serve as a useful tool for robots to navigate through collaborative tasks successfully. In this research, we propose a deep learning architecture that predicts both 3D human body motion and human intention for collaborative tasks. The model employs a multi-head attention mechanism, taking into account human motion and task context as inputs. The resulting outputs include the predicted motion of the human body and the inferred human intention. We have validated this architecture in two different tasks: collaborative object handover and collaborative grape harvesting. While the architecture remains the same for both tasks, the inputs differ. In the handover task, the architecture considers human motion, robot end effector, and obstacle positions as inputs. Additionally, the model can be conditioned on the desired intention to tailor the output motion accordingly. To assess the performance of the collaborative handover task, we conducted a user study to evaluate human perception of the robot’s sociability, naturalness, security, and comfort. This evaluation was conducted by comparing the robot’s behavior when it utilized the prediction in its planner versus when it did not. Furthermore, we also applied the model to a collaborative grape harvesting task. By integrating human motion prediction and human intention inference, our architecture shows promising results in enhancing the capabilities of robots in collaborative scenarios. The model’s flexibility allows it to handle various tasks with different inputs, making it adaptable to real-world applications.\u0000</p>","PeriodicalId":14361,"journal":{"name":"International Journal of Social Robotics","volume":"25 1","pages":""},"PeriodicalIF":4.7,"publicationDate":"2024-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141613281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring the Viability of Socially Assistive Robots for At-Home Cognitive Monitoring: Potential and Limitations
Pub Date: 2024-07-12 | DOI: 10.1007/s12369-024-01158-6
Matteo Luperto, Marta Romeo, Francesca Lunardini, Javier Monroy, Daniel Hernández García, Carlo Abbate, Angelo Cangelosi, Simona Ferrante, Javier Gonzalez-Jimenez, Nicola Basilico, N. Alberto Borghese
The early detection of mild cognitive impairment, a condition of increasing impact in our aging society, is a challenging task with no established answer. One promising solution is the deployment of robotic systems and ambient assisted living technology in the homes of older adults for monitoring and assistance. In this work, we present a qualitative analysis of the feasibility and acceptability of a socially assistive robot (SAR) deployed in prospective users' homes to monitor their cognitive capabilities through a set of digitalised neuropsychological tests and spot questions conveniently integrated into the robotic assistant's daily tasks. We describe an experimental campaign in which a robotic system, integrated within a larger framework, was installed in the homes of 10 users for at least 10 weeks, during which their cognitive capabilities were monitored by the robot. Concretely, the robot supervised the users during the completion of the tests and transparently monitored them by asking questions interleaved with their everyday activities. Results show general acceptance of the technology, which was able to carry out the intended tasks without being too invasive, paving the way for an impactful at-home use of SARs.
{"title":"Exploring the Viability of Socially Assistive Robots for At-Home Cognitive Monitoring: Potential and Limitations","authors":"Matteo Luperto, Marta Romeo, Francesca Lunardini, Javier Monroy, Daniel Hernández García, Carlo Abbate, Angelo Cangelosi, Simona Ferrante, Javier Gonzalez-Jimenez, Nicola Basilico, N. Alberto Borghese","doi":"10.1007/s12369-024-01158-6","DOIUrl":"https://doi.org/10.1007/s12369-024-01158-6","url":null,"abstract":"<p>The early detection of mild cognitive impairment, a condition of increasing impact in our aging society, is a challenging task with no established answer. One promising solution is the deployment of robotic systems and ambient assisted living technology in the houses of older adults for monitoring and assistance. In this work, we address and discuss a qualitative analysis on the feasibility and acceptability of a socially assistive robot (SAR) deployed in prospective users’ houses to monitor their cognitive capabilities through a set of digitalised neuropsychological tests and spot questions conveniently integrated within the robotic assistant’s daily tasks. We do this by describing an experimental campaign where a robotic system, integrated with a larger framework, was installed in the house of 10 users for a duration of at least 10 weeks, during which their cognitive capabilities were monitored by the robot. Concretely, the robots supervised the users during the completion of the tests and transparently monitored them by asking questions interleaved in their everyday activities. Results show a general acceptance of such technology, being able to carry out the intended tasks without being too invasive, paving the way for an impactful at-home use of SARs.</p>","PeriodicalId":14361,"journal":{"name":"International Journal of Social Robotics","volume":"10874 1","pages":""},"PeriodicalIF":4.7,"publicationDate":"2024-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141613224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating Human-Care Robot Services for the Elderly: An Experimental Study
Pub Date: 2024-07-08 | DOI: 10.1007/s12369-024-01157-7
Miyoung Cho, Dohyung Kim, Minsu Jang, Jaeyeon Lee, Jaehong Kim, Woo-han Yun, Youngwoo Yoon, Jinhyeok Jang, Chankyu Park, Woo-Ri Ko, Jaeyoon Jang, Ho-Sub Yoon, Daeha Lee, Choulsoo Jang
The increase in the elderly population is emerging as a serious social issue. The coronavirus pandemic increased the number of elderly people suffering from depression and loneliness owing to the lack of face-to-face activities. In this study, we developed an integrated system for a human-care robot service that provides cognitive and emotional support for elderly people, and verified its stability and usefulness in the real world. We recruited 40 elderly people for an experiment in an apartment testbed environment, and two elderly people who had long lived alone participated in the experiment at their own homes. Quantitative results were analyzed by comparing service success rates and user satisfaction across the two test environments to verify the stability of the service. Qualitative evaluations were also conducted through surveys and interviews to assess the usefulness of the service.
{"title":"Evaluating Human-Care Robot Services for the Elderly: An Experimental Study","authors":"Miyoung Cho, Dohyung Kim, Minsu Jang, Jaeyeon Lee, Jaehong Kim, Woo-han Yun, Youngwoo Yoon, Jinhyeok Jang, Chankyu Park, Woo-Ri Ko, Jaeyoon Jang, Ho-Sub Yoon, Daeha Lee, Choulsoo Jang","doi":"10.1007/s12369-024-01157-7","DOIUrl":"https://doi.org/10.1007/s12369-024-01157-7","url":null,"abstract":"<p>The increase in elderly population is emerging as a serious social issue. The coronavirus pandemic has increased the number of elderly people suffering from depression and loneliness owing to the lack of face-to-face activities. In this study, we developed an integrated system for the human-care robot service, considering cognitive and emotional support for elderly people, and verified its stability and usefulness in the real world. We recruited 40 elderly people for an apartment testbed environment experiment and two elderly people living alone for a long time participated in the experiment at their homes. Quantitative experimental results were analyzed by comparing service success rates and user satisfaction in two different test environments to verify the stability of the service. Qualitative evaluations were also conducted through surveys and interviews to assess the usefulness of the service.</p>","PeriodicalId":14361,"journal":{"name":"International Journal of Social Robotics","volume":"26 1","pages":""},"PeriodicalIF":4.7,"publicationDate":"2024-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141571351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}