Pub Date: 2024-04-24
DOI: 10.1007/s12369-024-01131-3
Title: Preference-Based People-Aware Navigation for Telepresence Robots
Alberto Bacchin, Gloria Beraldo, Jun Miura, Emanuele Menegatti
{"title":"Preference-Based People-Aware Navigation for Telepresence Robots","authors":"Alberto Bacchin, Gloria Beraldo, Jun Miura, Emanuele Menegatti","doi":"10.1007/s12369-024-01131-3","DOIUrl":"https://doi.org/10.1007/s12369-024-01131-3","url":null,"abstract":"","PeriodicalId":14361,"journal":{"name":"International Journal of Social Robotics","volume":null,"pages":null},"PeriodicalIF":4.7,"publicationDate":"2024-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140663762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-22
DOI: 10.1007/s12369-024-01128-y
Title: “I Want to Send a Message to My Friend”: Exploring the Shift of Agency to Older Adults in HRI
Hugo Simão, David Gonçalves, Ana C. Pires, Lúcia Abreu, Alexandre Bernardino, Jodi Forlizzi, Tiago Guerreiro
Communication among some older adults is affected by cognitive and mobility impairments. This increases isolation, particularly for those residing in care homes, and leads to accelerated cognitive decline. Previous research has leveraged assistive robots to promote recreational routines and communication among older adults, with the robot leading the interaction. However, older adults could have more agency in the interaction, with robots acting as extensions of elders’ intentions and needs. Therefore, we explored an approach whereby the robot’s agency is shifted to the older adults, who lead the interaction by commanding the robot’s actions using interactive physical blocks (tangible blocks). We conducted sessions with 22 care home residents in which they could exchange messages and objects using the robot. Based on the older adults’ observed behaviors during the sessions and on perspectives gathered from interviews with geriatric professionals, we reflect on the opportunities and challenges of increased user agency and on the asymmetries that emerged from differing abilities and personality traits. Our qualitative results highlight the potential of robotic approaches to extend the agency and communication of older adults, anchored in human values such as the exchange of affection, collaboration, and competition.
{"title":"“I Want to Send a Message to My Friend”: Exploring the Shift of Agency to Older Adults in HRI","authors":"Hugo Simão, David Gonçalves, Ana C. Pires, Lúcia Abreu, Alexandre Bernardino, Jodi Forlizzi, Tiago Guerreiro","doi":"10.1007/s12369-024-01128-y","DOIUrl":"https://doi.org/10.1007/s12369-024-01128-y","url":null,"abstract":"<p>Communication among some older adults is affected by cognitive and mobility impairments. This increases isolation, particularly for those residing in care homes, and leads to accelerated cognitive decline. Previous research has leveraged assistive robots to promote recreational routines and communication among older adults, with the robot leading the interaction. However, older adults could have more agency in the interaction, as robots could extend elders’ intentions and needs. Therefore, we explored an approach whereby the robot’s agency is shifted to the older adults who lead the interaction by commanding a robot’s actions using interactive physical blocks (tangible blocks). We conducted sessions with 22 care home dwellers where they could exchange messages and objects using the robot. Based on older adults’ observed behaviors during the sessions and perspectives gathered from interviews with geriatric professionals, we reflect on the opportunities and challenges for increased user agency and the asymmetries that emerged from differing abilities and personality traits. Our qualitative results highlight the potential of robotic approaches to extend the agency and communication of older adults, anchored on human values, such as the exchange of affection, collaboration, and competition.</p>","PeriodicalId":14361,"journal":{"name":"International Journal of Social Robotics","volume":null,"pages":null},"PeriodicalIF":4.7,"publicationDate":"2024-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140635382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-16
DOI: 10.1007/s12369-024-01129-x
Title: Personality Traits and Willingness to Use a Robot: Extending Emic/Etic Personality Concept
Mohammad Babamiri, Rashid Heidarimoghadam, Fakhradin Ghasemi, Leili Tapak, Alireza Mortezapour
Examining personality traits can enhance the likelihood of successful interaction between humans and robots in forthcoming work settings. The emic/etic approach stands out as a crucial method for investigating personality types in such future environments, yet no study has explored the impact of this approach on individuals’ willingness to engage with a robot. In the present study, we aim to determine whether emic characteristics can influence the connection between etic traits and the willingness to use a robot. A total of 367 male workers participated, and all data were collected using valid and reliable questionnaires. The Five-Factor model of personality was treated as the set of etic personality characteristics, while technology affinity and STARA awareness (Smart Technology, Artificial Intelligence, Robotics, and Algorithms) were assessed as emic moderators. The analysis followed the method presented by Hayes et al. for testing moderation. Technology affinity, as a primary emic factor, moderates the associations of neuroticism, openness, agreeableness, and conscientiousness with the willingness to use robots, whereas STARA moderates only the relationship with neuroticism among workers. Notably, extroversion shows no interaction with either emic factor. Both emic and etic personality characteristics were recognized as significant facilitators of the inclination to use robots. In addition to technology affinity and STARA, it is advisable to explore new emic traits and their interactive effects with etic personality characteristics.
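A minimal sketch of the kind of moderation test this abstract describes, approximated as an OLS regression with a mean-centered interaction term (roughly what Hayes’ PROCESS Model 1 estimates). The variable names and simulated data are hypothetical stand-ins, not the study’s actual measures:

```python
# Hedged sketch: moderation approximated as an OLS interaction model.
# All variables are simulated stand-ins for the study's questionnaires.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 367  # sample size reported in the abstract
df = pd.DataFrame({
    "neuroticism": rng.normal(size=n),    # etic trait (Five-Factor model)
    "tech_affinity": rng.normal(size=n),  # emic moderator
})
# Simulated outcome: willingness to use a robot, with a built-in interaction.
df["willingness"] = (0.3 * df["neuroticism"] * df["tech_affinity"]
                     - 0.2 * df["neuroticism"]
                     + rng.normal(scale=0.5, size=n))

# Mean-center the predictors, then test the interaction term; a significant
# neuroticism:tech_affinity coefficient indicates moderation.
for col in ("neuroticism", "tech_affinity"):
    df[col] -= df[col].mean()
model = smf.ols("willingness ~ neuroticism * tech_affinity", data=df).fit()
print(model.summary().tables[1])
```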
{"title":"Personality Traits and Willingness to Use a Robot: Extending Emic/Etic Personality Concept","authors":"Mohammad Babamiri, Rashid Heidarimoghadam, Fakhradin Ghasemi, Leili Tapak, Alireza Mortezapour","doi":"10.1007/s12369-024-01129-x","DOIUrl":"https://doi.org/10.1007/s12369-024-01129-x","url":null,"abstract":"<p>Examining personality traits can enhance the likelihood of a successful interaction between humans and robots in forthcoming work settings. Employing the emic/etic approach stands out as a crucial method for investigating personality types in the context of future environments. Currently, no study has explored the impact of this approach on individuals’ willingness to engage with a robot. In the present study, our aim is to determine whether emic characteristics can influence the connection between etic traits and the willingness to use a robot. In the current study, 367 male workers participated. All data were collected using valid and reliable questionnaires. The Five-Factor model of personality was regarded as etic personality characteristics, while the moderating roles of technology affinity and STARA were assessed as emic personality characteristics. The analytical process followed the method presented by Hayes et al. for analyzing moderators. Technology affinity, as a primary emic factor, exerts a moderating influence on the association between neuroticism, openness, agreeableness, conscientiousness, and the willingness to use robots. Conversely, STARA serves as a mediator exclusively in the relationship with neuroticism among workers. Notably, extroversion does not exhibit mediation with any of the emic factors. Both emic and etic personality characteristics were recognized as significant facilitators of the inclination to use robots. In addition to technology affinity and STARA, it is advisable to explore new emic traits and their interactive effects with etic personality characteristics.</p>","PeriodicalId":14361,"journal":{"name":"International Journal of Social Robotics","volume":null,"pages":null},"PeriodicalIF":4.7,"publicationDate":"2024-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140584956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-11
DOI: 10.1007/s12369-024-01134-0
Title: Comparison of Outcomes Between Robot-Assisted Language Learning System and Human Tutors: Focusing on Speaking Ability
Takamasa Iio, Yuichiro Yoshikawa, Kohei Ogawa, Hiroshi Ishiguro
This study explores how the outcomes produced by current mainstream Robot-Assisted Language Learning (RALL) systems compare with those of human tutors teaching a typical English conversation lesson. To this end, an experiment was conducted with 26 participants divided into a RALL group (14 participants) and a human tutor group (12 participants). All participants took a pre-test on the first day, studied for 30 min per day for 7 days, and took 3 post-tests on the last day. The test results indicated that the RALL group improved lexical/grammatical error rates and fluency of speech considerably more than the human tutor group. Other characteristics of speech, such as rhythm, pronunciation, complexity, and task achievement, showed no differences between the groups. The results suggest that exercises with the RALL system enabled participants to commit the learned expressions to memory, whereas human tutors emphasized communication with the participants. This study demonstrates the benefits of RALL systems in lessons that human tutors find hard to teach.
{"title":"Comparison of Outcomes Between Robot-Assisted Language Learning System and Human Tutors: Focusing on Speaking Ability","authors":"Takamasa Iio, Yuichiro Yoshikawa, Kohei Ogawa, Hiroshi Ishiguro","doi":"10.1007/s12369-024-01134-0","DOIUrl":"https://doi.org/10.1007/s12369-024-01134-0","url":null,"abstract":"<p>This study explores how much current mainstream Robot-Assisted Language Learning (RALL) systems produce outcomes compared to human tutors instructing a typical English conversation lesson. To this end, an experiment was conducted with 26 participants divided in RALL (14 participants) and human tutor (12 participants) groups. All participants took a pre-test on the first day, followed by 30 min of study per day for 7 days, and 3 post-tests on the last day. The test results indicated that the RALL group considerably improved lexical/grammatical error rates and fluency of speech compared to that for the human tutor group. The other characteristics, such as rhythm, pronunciation, complexity, and task achievement of speech did not indicate any differences between the groups. The results suggested that exercises with the RALL system enabled participants to commit the learned expressions to memory, whereas those with human tutors emphasized on communication with the participants. This study demonstrated the benefits of using RALL systems that can work well in lessons that human tutors find hard to teach.</p>","PeriodicalId":14361,"journal":{"name":"International Journal of Social Robotics","volume":null,"pages":null},"PeriodicalIF":4.7,"publicationDate":"2024-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140584950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-10
DOI: 10.1007/s12369-024-01125-1
Title: Having Different Dialog Roles in Telecommunication by Using Two Teleoperated Robots Reduces an Operator’s Guilt
Reina Nozawa, Kazuki Sakai, Megumi Kawata, Hiroshi Ishiguro, Yuichiro Yoshikawa
In recent years, applications of social robots as the operator’s avatar have been widely studied for remote conversation with rich nonverbal information. Having another side-participant robot beside the operator’s avatar robot was found to be effective for providing long-lasting backchannels to the interlocutor. The side-participant robot is also expected to play a role in assisting human participation in multiparty conversations. However, such a focus has not been applied to remote conversations with multiple robots. Here, we propose a multiple-robot telecommunication system in which the operator uses a side-participant robot to support the conversation they develop through the main speaker robot, and we verify its effectiveness. In a laboratory experiment in which subjects were made to feel stressed by being forced to ask rude questions of the interlocutor, the proposed system was shown to reduce guilt and improve the overall mood of operators. The results encourage the application of multi-robot remote conversation systems that allow users to participate in remote conversations with less anxiety about potential failure to maintain the conversation.
{"title":"Having Different Dialog Roles in Telecommunication by Using Two Teleoperated Robots Reduces an Operator’s Guilt","authors":"Reina Nozawa, Kazuki Sakai, Megumi Kawata, Hiroshi Ishiguro, Yuichiro Yoshikawa","doi":"10.1007/s12369-024-01125-1","DOIUrl":"https://doi.org/10.1007/s12369-024-01125-1","url":null,"abstract":"<p>In recent years, applications of social robots as the operator’s avatar have been widely studied for remote conversation with rich nonverbal information. Having another side-participant robot beside the avatar robot of the operator was found to be effective for providing long-lasting backchannels to the interlocutor. The side-participant robot is also expected to play a role in assisting human participation in multiparty conversations. However, such a focus has not been applied to remote conversations with multiple robots. Here, we propose a multiple-robot telecommunication system with which the operator can use a side-participant robot to assist conversation that is developed by the operator through the main speaker robot to verify its effectiveness. In the laboratory experiment where the subjects were made to feel stressed by being forced to provide rude questions to the interlocutor, the proposed system was shown to reduce guilt and to improve the overall mood of operators. The result encourages the application of a multi robot remote conversation system to allow the user to participate in remote conversations with less anxiety of potential failure in maintaining the conversation.</p>","PeriodicalId":14361,"journal":{"name":"International Journal of Social Robotics","volume":null,"pages":null},"PeriodicalIF":4.7,"publicationDate":"2024-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140585037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-10
DOI: 10.1007/s12369-024-01133-1
Title: Strangers on a Team?: Human Companions, Compared to Strangers or Individuals, are More Likely to Reject a Robot Teammate
Cobe Deane Wilson, Danielle Langlois, Marlena R. Fraune
As robots become more common, people interact with them individually, with strangers, and with friends. For example, a family coming across a robot in a mall might ask it for instructions, while an individual might hesitate to interact with the robot until they see another person doing so, and then explore the robot together. Although human–robot interaction (HRI) research has recently uncovered the importance of examining how group behavior toward robots differs from individuals’ behavior, most HRI research so far has not distinguished behavior based on group type (e.g., stranger, companion). In this online lab-based study, we explore how individuals, strangers, and companions collaborate with robot teammates. We test competing hypotheses: (1) more cohesive companion groups will form a human subgroup and exclude the robots more than strangers or individuals do, vs. (2) more cohesive companion groups will provide social support that helps them interact better with the novel robotic technology than strangers or individuals. In this cooperative context, in which participants were required to interact with the robot, the results supported H1, the subgroup hypothesis. Based on these findings, people deploying robots should note that if people are required to interact with the robots, the interactions may not go as smoothly for companion groups as for stranger groups or individuals.
{"title":"Strangers on a Team?: Human Companions, Compared to Strangers or Individuals, are More Likely to Reject a Robot Teammate","authors":"Cobe Deane Wilson, Danielle Langlois, Marlena R. Fraune","doi":"10.1007/s12369-024-01133-1","DOIUrl":"https://doi.org/10.1007/s12369-024-01133-1","url":null,"abstract":"<p>As robots become more common, people interact with them individually, with strangers, and with friends. For example, when coming across a robot in a mall, a family might ask it for instructions. An individual person might hesitate to interact with the robot until they see another person interacting, and then explore the robot together. Although human–robot interaction (HRI) research has recently uncovered the importance of examining differences in group behavior toward robots versus individuals’ behavior, thus far, most HRI research has not distinguished behavior based on group type (e.g., stranger, companion). In this online lab-based study, we explore how individuals, strangers, and companions collaborate with robot teammates. We test competing hypotheses: (1) More cohesive companion groups will form a <i>human subgroup</i> and exclude the robots more than strangers or individuals, vs. (2) More cohesive companion groups will provide <i>social support</i> to interact better with the novel robotic technology than strangers or individuals. In this cooperative context in which participants were required to interact with the robot, results supported H1: the subgroup hypothesis. Based on these findings, people deploying robots should note that if people are required to interact with the robots, the interactions may not go as smoothly for companion groups compared to stranger groups or individuals.</p>","PeriodicalId":14361,"journal":{"name":"International Journal of Social Robotics","volume":null,"pages":null},"PeriodicalIF":4.7,"publicationDate":"2024-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140585102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-09
DOI: 10.1007/s12369-024-01132-2
Title: Effects of Explanation Strategy and Autonomy of Explainable AI on Human–AI Collaborative Decision-making
Bingcheng Wang, Tianyi Yuan, Pei-Luen Patrick Rau
This study examined the effects of the explanation strategy (global vs. deductive vs. contrastive explanation) and autonomy level (high vs. low) of explainable agents on human–AI collaborative decision-making. A 3 × 2 mixed-design experiment was conducted, with a modified Mahjong game as the decision-making task. Forty-eight participants were divided into three groups, each collaborating with an agent that used a different explanation strategy; each agent had two autonomy levels. The results indicated that global explanation incurred the lowest mental workload and the highest understandability. Contrastive explanation required the highest mental workload but yielded the highest perceived competence, affect-based trust, and social presence. Deductive explanation was the worst in terms of social presence. The high-autonomy agents yielded lower mental workload and interaction fluency but higher faith and social presence than the low-autonomy agents. These findings can help practitioners design user-centered explainable decision-support agents and choose appropriate explanation strategies for different situations.
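As an illustration only, the sketch below analyzes a simulated 3 (explanation strategy, between-subjects) × 2 (autonomy level, within-subjects) design of the kind described; the abstract does not name the statistical test, so the mixed ANOVA here, and every variable name, is an assumption:

```python
# Hedged sketch: one standard analysis for a 3 x 2 mixed design.
# Simulated data; "workload" and all factor labels are hypothetical.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(42)
strategies = ["global", "deductive", "contrastive"]
rows = []
for pid in range(48):                    # 48 participants, as in the abstract
    strategy = strategies[pid % 3]       # between-subjects factor
    for autonomy in ("low", "high"):     # within-subjects factor
        rows.append({
            "participant": pid,
            "strategy": strategy,
            "autonomy": autonomy,
            "workload": rng.normal(loc=5.0 - (autonomy == "high"), scale=1.0),
        })
df = pd.DataFrame(rows)

# Mixed ANOVA: between factor = strategy, within factor = autonomy.
print(pg.mixed_anova(data=df, dv="workload", within="autonomy",
                     subject="participant", between="strategy"))
```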
{"title":"Effects of Explanation Strategy and Autonomy of Explainable AI on Human–AI Collaborative Decision-making","authors":"Bingcheng Wang, Tianyi Yuan, Pei-Luen Patrick Rau","doi":"10.1007/s12369-024-01132-2","DOIUrl":"https://doi.org/10.1007/s12369-024-01132-2","url":null,"abstract":"<p>This study examined the effects of explanation strategy (global explanation vs. deductive explanation vs. contrastive explanation) and autonomy level (high vs. low) of explainable agents on human–AI collaborative decision-making. A 3 × 2 mixed-design experiment was conducted. The decision-making task was a modified Mahjong game. Forty-eight participants were divided into three groups, each collaborating with an agent with a different explanation strategy. Each agent had two autonomy levels. The results indicated that global explanation incurred the lowest mental workload and highest understandability. Contrastive explanation required the highest mental workload but incurred the highest perceived competence, affect-based trust, and social presence. Deductive explanation was found to be the worst in terms of social presence. The high-autonomy agents incurred lower mental workload and interaction fluency but higher faith and social presence than the low-autonomy agents. The findings of this study can help practitioners in designing user-centered explainable decision-support agents and choosing appropriate explanation strategies for different situations.</p>","PeriodicalId":14361,"journal":{"name":"International Journal of Social Robotics","volume":null,"pages":null},"PeriodicalIF":4.7,"publicationDate":"2024-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140585032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-09
DOI: 10.1007/s12369-024-01124-2
Title: Personalizing Activity Selection in Assistive Social Robots from Explicit and Implicit User Feedback
Marcos Maroto-Gómez, María Malfaz, José Carlos Castillo, Álvaro Castro-González, Miguel Ángel Salichs
Robots in multi-user environments require adaptation to produce personalized interactions. In these scenarios, user feedback lets the robot learn from experience and use this knowledge to generate activities adapted to each user’s preferences. However, preferences are user-specific and may vary over time, so learning is required to personalize the robot’s actions to each user. In Human–Robot Interaction, robots can obtain feedback by asking users their opinion of an activity (explicit feedback) or by estimating it from the interaction itself (implicit feedback). This paper presents a Reinforcement Learning framework for social robots that personalizes activity selection using the preferences and feedback obtained from users. It also studies the role of user feedback in learning, asking whether combining explicit and implicit user feedback produces better robot adaptive behavior than using either alone. We evaluated the system in a long-term experiment with 24 participants divided into three conditions: (i) adapting the activity selection using explicit feedback obtained by asking users how much they liked the activities; (ii) using implicit feedback obtained from interaction metrics generated by the user’s actions in each activity; and (iii) combining explicit and implicit feedback. As hypothesized, the results show that combining both kinds of feedback produces better adaptation, as measured by the correlation between initial and final activity scores, outperforming explicit or implicit feedback used individually. We also found that the kind of user feedback affected neither the users’ engagement nor the number of activities carried out during the experiment.
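A minimal sketch of how such a loop could combine the two feedback channels, written as a per-user epsilon-greedy bandit whose reward blends an explicit rating with an implicit interaction metric. The blend weight, update rule, and activity names are illustrative assumptions, not the paper’s exact formulation:

```python
# Hedged sketch: activity selection with blended explicit/implicit feedback.
import random

class ActivitySelector:
    """Per-user epsilon-greedy bandit over a fixed set of activities."""

    def __init__(self, activities, epsilon=0.1, alpha=0.2, w_explicit=0.5):
        self.q = {a: 0.0 for a in activities}  # learned preference estimates
        self.epsilon = epsilon                 # exploration rate
        self.alpha = alpha                     # learning rate
        self.w = w_explicit                    # weight on explicit feedback

    def select(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.q))  # explore a random activity
        return max(self.q, key=self.q.get)      # exploit the current best

    def update(self, activity, explicit=None, implicit=None):
        # Condition (i): explicit only; (ii): implicit only; (iii): blend.
        if explicit is not None and implicit is not None:
            reward = self.w * explicit + (1 - self.w) * implicit
        else:
            reward = explicit if explicit is not None else implicit
        self.q[activity] += self.alpha * (reward - self.q[activity])

selector = ActivitySelector(["music", "quiz", "news", "exercise"])
chosen = selector.select()
selector.update(chosen, explicit=0.8, implicit=0.6)  # rating + engagement
```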
{"title":"Personalizing Activity Selection in Assistive Social Robots from Explicit and Implicit User Feedback","authors":"Marcos Maroto-Gómez, María Malfaz, José Carlos Castillo, Álvaro Castro-González, Miguel Ángel Salichs","doi":"10.1007/s12369-024-01124-2","DOIUrl":"https://doi.org/10.1007/s12369-024-01124-2","url":null,"abstract":"<p>Robots in multi-user environments require adaptation to produce personalized interactions. In these scenarios, the user’s feedback leads the robots to learn from experiences and use this knowledge to generate adapted activities to the user’s preferences. However, preferences are user-specific and may suffer variations, so learning is required to personalize the robot’s actions to each user. Robots can obtain feedback in Human–Robot Interaction by asking users their opinion about the activity (explicit feedback) or estimating it from the interaction (implicit feedback). This paper presents a Reinforcement Learning framework for social robots to personalize activity selection using the preferences and feedback obtained from the users. This paper also studies the role of user feedback in learning, and it asks whether combining explicit and implicit user feedback produces better robot adaptive behavior than considering them separately. We evaluated the system with 24 participants in a long-term experiment where they were divided into three conditions: (i) adapting the activity selection using the explicit feedback that was obtained from asking the user how much they liked the activities; (ii) using the implicit feedback obtained from interaction metrics of each activity generated from the user’s actions; and (iii) combining explicit and implicit feedback. As we hypothesized, the results show that combining both feedback produces better adaptive values when correlating initial and final activity scores, overcoming the use of individual explicit and implicit feedback. We also found that the kind of user feedback does not affect the user’s engagement or the number of activities carried out during the experiment.</p>","PeriodicalId":14361,"journal":{"name":"International Journal of Social Robotics","volume":null,"pages":null},"PeriodicalIF":4.7,"publicationDate":"2024-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140585074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-08
DOI: 10.1007/s12369-024-01117-1
Title: Machine Learning Driven Developments in Behavioral Annotation: A Recent Historical Review
Eleanor Watson, Thiago Viana, Shujun Zhang
Annotation tools serve a critical role in the generation of datasets that fuel machine learning applications. With the advent of Foundation Models, particularly those based on Transformer architectures and expansive language models, the capacity for training on comprehensive, multimodal datasets has been substantially enhanced. This not only facilitates robust generalization across diverse data categories and knowledge domains but also necessitates a novel form of annotation—prompt engineering—for qualitative model fine-tuning. This advancement creates new avenues for machine intelligence to more precisely identify, forecast, and replicate human behavior, addressing historical limitations that contribute to algorithmic inequities. Nevertheless, the voluminous and intricate nature of the data essential for training multimodal models poses significant engineering challenges, particularly with regard to bias. No consensus has yet emerged on optimal procedures for conducting this annotation work in a manner that is ethically responsible, secure, and efficient. This historical literature review traces advancements in these technologies from 2018 onward, underscores significant contributions, and identifies existing knowledge gaps and avenues for future research pertinent to the development of Transformer-based multimodal Foundation Models. An initial survey of over 724 articles yielded 156 studies that met the criteria for historical analysis; these were further narrowed down to 46 key papers spanning the years 2018–2022. The review offers valuable perspectives on the evolution of best practices, pinpoints current knowledge deficiencies, and suggests potential directions for future research. The paper includes six figures and delves into the transformation of research landscapes in the realm of machine-assisted behavioral annotation, focusing on critical issues such as bias.
{"title":"Machine Learning Driven Developments in Behavioral Annotation: A Recent Historical Review","authors":"Eleanor Watson, Thiago Viana, Shujun Zhang","doi":"10.1007/s12369-024-01117-1","DOIUrl":"https://doi.org/10.1007/s12369-024-01117-1","url":null,"abstract":"<p>Annotation tools serve a critical role in the generation of datasets that fuel machine learning applications. With the advent of Foundation Models, particularly those based on Transformer architectures and expansive language models, the capacity for training on comprehensive, multimodal datasets has been substantially enhanced. This not only facilitates robust generalization across diverse data categories and knowledge domains but also necessitates a novel form of annotation—prompt engineering—for qualitative model fine-tuning. This advancement creates new avenues for machine intelligence to more precisely identify, forecast, and replicate human behavior, addressing historical limitations that contribute to algorithmic inequities. Nevertheless, the voluminous and intricate nature of the data essential for training multimodal models poses significant engineering challenges, particularly with regard to bias. No consensus has yet emerged on optimal procedures for conducting this annotation work in a manner that is ethically responsible, secure, and efficient. This historical literature review traces advancements in these technologies from 2018 onward, underscores significant contributions, and identifies existing knowledge gaps and avenues for future research pertinent to the development of Transformer-based multimodal Foundation Models. An initial survey of over 724 articles yielded 156 studies that met the criteria for historical analysis; these were further narrowed down to 46 key papers spanning the years 2018–2022. The review offers valuable perspectives on the evolution of best practices, pinpoints current knowledge deficiencies, and suggests potential directions for future research. The paper includes six figures and delves into the transformation of research landscapes in the realm of machine-assisted behavioral annotation, focusing on critical issues such as bias.</p>","PeriodicalId":14361,"journal":{"name":"International Journal of Social Robotics","volume":null,"pages":null},"PeriodicalIF":4.7,"publicationDate":"2024-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140582887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-06
DOI: 10.1007/s12369-024-01123-3
Title: Fear of Being Replaced by Robots and Turnover Intention: Evidence from the Chinese Manufacturing Industry
As China has become the largest user of industrial robots, the need to understand how workers perceive robot-human substitution and how their perceptions influence their job behaviors is becoming increasingly crucial. This paper examined whether workers’ fear of being replaced by robots (FRR) is correlated with one aspect of job behavior: turnover intention, the extent to which an individual intends to change their job within a specific time period. Using a dataset covering 1512 manufacturing workers in Guangdong province of China, we found that workers who fear losing their jobs to robots report significantly higher turnover intention. We also found that the positive effect of FRR on turnover intention increased when robots were already utilised in the workplace. The effect likewise increased when workers perceived that their wages had not risen with the productivity gains from robotisation. Based on these findings, we provide practical recommendations to organizations on effectively addressing the turnover intention arising from FRR.
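As a rough illustration of the moderation pattern reported, the sketch below regresses simulated turnover intention on FRR with workplace robot adoption as a binary moderator; all variable names and the data are hypothetical, not the study’s survey items:

```python
# Hedged sketch: FRR effect on turnover intention, moderated by whether
# robots are already in use. Simulated data, hypothetical variable names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1512  # sample size reported in the abstract
df = pd.DataFrame({
    "frr": rng.normal(size=n),                    # fear of being replaced
    "robots_in_use": rng.integers(0, 2, size=n),  # 1 if robots deployed
})
# Simulated outcome with a steeper FRR slope where robots are in use.
df["turnover"] = (0.3 * df["frr"]
                  + 0.4 * df["frr"] * df["robots_in_use"]
                  + rng.normal(scale=0.5, size=n))

fit = smf.ols("turnover ~ frr * robots_in_use", data=df).fit()
print(fit.params)  # a positive frr:robots_in_use term mirrors the finding
```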
{"title":"Fear of Being Replaced by Robots and Turnover Intention: Evidence from the Chinese Manufacturing Industry","authors":"","doi":"10.1007/s12369-024-01123-3","DOIUrl":"https://doi.org/10.1007/s12369-024-01123-3","url":null,"abstract":"<h3>Abstract</h3> <p>As China has become the largest user of industrial robots, the need to understand how workers perceive robot-human substitution and how their perceptions influence their job behaviors is becoming increasingly crucial. This paper examined whether workers’ fear of being replaced by robots (FRR) is correlated with one aspect of job behavior: turnover intention, which refers to the extent to which an individual intends to change their job within a specific time period. Using a dataset covering 1512 manufacturing workers in Guangdong province of China, we found that workers who fear losing their jobs to robots report significantly higher turnover intention. We also found that the positive effect of FRR on turnover intention increased when robots were already utilised in the workplace. This effect was also found to be increase when workers perceived that their wages did not increase with the rise in productivity due to robotisation. Based on these findings, we provide practical recommendations to organizations on effectively addressing the turnover intention arising from the FRR.</p>","PeriodicalId":14361,"journal":{"name":"International Journal of Social Robotics","volume":null,"pages":null},"PeriodicalIF":4.7,"publicationDate":"2024-04-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140582925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}