
ACM Transactions on Human-Robot Interaction: Latest Publications

Towards Designing Companion Robots with the End in Mind
IF 5.1 Q2 Computer Science Pub Date: 2023-03-13 DOI: 10.1145/3568294.3580046
Waki Kamino
This paper presents an early-stage idea of using 'robot death' as an integral component of human-robot interaction design for companion robots. Reviewing previous discussions around the deaths of companion robots in real-life and popular culture contexts, and analyzing the lifelike design of current companion robots on the market, the paper explores the potential advantages of designing companion robots and human-robot interaction with their 'death' in mind.
Citations: 0
Designing a Robot which Touches the User's Head with Intra-Hug Gestures
IF 5.1 Q2 Computer Science Pub Date: 2023-03-13 DOI: 10.1145/3568294.3580096
Yuya Onishi, H. Sumioka, M. Shiomi
Hugging has many positive benefits, and several studies have applied it in human-robot interaction. However, due to limitations in robot performance, these robots touched only the human's back. In this study, we developed a hug robot named "Moffuly-II." This robot can not only hug with intra-hug gestures but also touch the user's back or head. This paper describes the robot system and users' impressions of hugging the robot.
Citations: 0
On Using Social Signals to Enable Flexible Error-Aware HRI
IF 5.1 Q2 Computer Science Pub Date: 2023-03-13 DOI: 10.1145/3568162.3576990
Maia Stiber, R. Taylor, Chien-Ming Huang
Prior error management techniques often lack the versatility to appropriately address robot errors across tasks and scenarios. Their fundamental framework involves explicit, manual error management and implicit, domain-specific, information-driven error management, tailoring responses to specific interaction contexts. We present a framework for error-aware systems that adds implicit social signals as another information channel to create more flexibility in application. To support this notion, we introduce a novel dataset (composed of three data collections) focused on understanding natural facial action unit (AU) responses to robot errors during physically based human-robot interactions---varying across task, error, person, and scenario. Analysis of the dataset reveals that, through the lens of error detection, using AUs as input to error management affords the system flexibility and has the potential to improve the error detection response rate. In addition, we provide an example of a real-time interactive robot error management system using the error-aware framework.
Citations: 5
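The abstract's core mechanism — treating facial action unit (AU) responses as an implicit signal for detecting robot errors — can be sketched in a few lines. This is a hypothetical illustration, not the authors' system: the AU names, intensity scale, and threshold rule are all assumptions for demonstration.

```python
# Hypothetical sketch of AU-based error detection: flag a likely robot
# error when confusion-related facial action units spike. AU choices,
# the 0-5 intensity scale, and the threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AUFrame:
    au04_brow_lowerer: float       # AU intensity, assumed 0-5 scale
    au12_lip_corner_puller: float  # carried along for other analyses

def detect_error(window: list[AUFrame], threshold: float = 2.0) -> bool:
    """Flag a likely robot error when mean brow-lowering exceeds threshold."""
    if not window:
        return False
    mean_au04 = sum(f.au04_brow_lowerer for f in window) / len(window)
    return mean_au04 >= threshold

# A reaction with strongly lowered brows is flagged as a likely error.
frames = [AUFrame(3.1, 0.2), AUFrame(2.8, 0.1)]
print(detect_error(frames))  # True
```

In a real error-aware pipeline this detector would be one channel alongside explicit and domain-specific signals, which is precisely the flexibility the paper argues for.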
Robot-Supported Information Search: Which Conversational Interaction Style do Children Prefer?
IF 5.1 Q2 Computer Science Pub Date: 2023-03-13 DOI: 10.1145/3568294.3580128
Suyash Sharma, T. Beelen, K. Truong
Searching via speech with a robot can better support children in expressing their information needs. We report on an exploratory study where children (N=35) worked on search tasks with two robots using different interaction styles. One system posed closed, yes/no questions and was more system-driven, while the other used open-ended questions and was more user-driven. We studied children's preferences and experiences of these interaction styles using questionnaires and semi-structured interviews. We found no strong overall preference between the interaction styles. However, some children reported task-dependent preferences. We further report on children's interpretation and reasoning around interaction styles for robots supporting information search.
Citations: 0
Effects of Predictive Robot Eyes on Trust and Task Performance in an Industrial Cooperation Task
IF 5.1 Q2 Computer Science Pub Date: 2023-03-13 DOI: 10.1145/3568294.3580123
L. Onnasch, Paul Schweidler, Maximilian Wieser
Industrial cobots can perform variable action sequences. For human-robot interaction (HRI) this can have detrimental effects, as the robot's actions can be difficult to predict. In human interaction, eye gaze intuitively directs attention and communicates subsequent actions. Whether this mechanism can also benefit HRI is not well understood. This study investigated the impact of anthropomorphic eyes as directional cues in robot design. 42 participants worked on two consecutive tasks in an embodied HRI with a Sawyer robot. The study used a between-subjects design and presented anthropomorphic eyes, arrows, or a black screen (as control condition) on the robot's display. Results showed that neither the directional stimuli nor the anthropomorphic design in particular led to increased trust. However, anthropomorphic robot eyes improved prediction speed, an effect not found for the non-anthropomorphic cues (arrows). Anthropomorphic eyes therefore seem better suited for implementation on an industrial robot.
Citations: 1
People Dynamically Update Trust When Interactively Teaching Robots
IF 5.1 Q2 Computer Science Pub Date: 2023-03-13 DOI: 10.1145/3568162.3576962
V. B. Chi, B. Malle
Human-robot trust research often measures people's trust in robots in individual scenarios. However, humans may update their trust dynamically as they continuously interact with a robot. In a well-powered study (n = 220), we investigate the trust updating process across a 15-trial interaction. In a novel paradigm, participants act as teachers to a simulated robot on a smartphone-based platform, and we assess trust at multiple levels (momentary trust feelings, perceptions of trustworthiness, and intended reliance). Results reveal that people are highly sensitive to the robot's learning progress trial by trial: they take into account previous-task performance, current-task difficulty, and cumulative learning across training. More integrative perceptions of robot trustworthiness grow steadily as people gather more evidence from observing robot performance, especially for faster-learning robots. Intended reliance on the robot in novel tasks increased only for faster-learning robots.
Citations: 5
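The trial-by-trial updating the abstract describes — trust moved by performance, weighted by task difficulty — can be illustrated with a simple smoothing model. This is an illustrative sketch, not the authors' model; the update rule, the learning rate, and the difficulty weighting are assumptions chosen to mirror the qualitative findings.

```python
# Illustrative (not the authors') model of trial-by-trial trust updating:
# trust moves toward 1 on success and toward 0 on failure, with
# current-task difficulty weighting how much each outcome counts.
def update_trust(trust: float, success: bool, difficulty: float,
                 learning_rate: float = 0.2) -> float:
    """Return updated trust in [0, 1] after observing one trial.

    A success on a hard task (difficulty near 1) raises trust a lot;
    a failure on a hard task lowers it only a little, mirroring the
    finding that people weigh performance against task difficulty.
    """
    evidence = 1.0 if success else 0.0
    weight = learning_rate * (difficulty if success else (1.0 - difficulty))
    return trust + weight * (evidence - trust)

# Cumulative learning across a short training sequence.
trust = 0.5
for success, difficulty in [(True, 0.3), (True, 0.6), (False, 0.9)]:
    trust = update_trust(trust, success, difficulty)
```

Because each update is anchored to the previous trust value, the model naturally accumulates evidence over the interaction, in the spirit of the 15-trial paradigm above.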
Reactive Planning for Coordinated Handover of an Autonomous Aerial Manipulator
IF 5.1 Q2 Computer Science Pub Date: 2023-03-13 DOI: 10.1145/3568294.3580055
Jérôme Truc, D. Sidobre, R. Alami
In this paper, we present a coordinated and reactive human-aware motion planner for performing a handover task with an autonomous aerial manipulator (AAM). We present a method to determine the final state of the AAM for a handover task based on the current state of the human and the surrounding obstacles. We consider the human's visual field, the effort required to turn the head and see the AAM, and the discomfort caused to the human. We apply these social constraints together with the kinematic constraints of the AAM to determine its coordinated motion along the trajectory.
Citations: 0
The Robot Made Us Hear Each Other: Fostering Inclusive Conversations among Mixed-Visual Ability Children
IF 5.1 Q2 Computer Science Pub Date: 2023-03-13 DOI: 10.1145/3568162.3576997
Isabel Neto, Filipa Correia, Filipa Rocha, Patricia Piedade, Ana Paiva, Hugo Nicolau
Inclusion is key in group work and collaborative learning. We developed a mediator robot to support and promote inclusion in group conversations, particularly in groups composed of children with and without visual impairment. We investigate the effect of two mediation strategies on group dynamics, inclusion, and perception of the robot. We conducted a within-subjects study with 78 children, 26 of whom had visual impairments, in a decision-making activity. Results indicate that the robot can foster inclusion in mixed-visual ability group conversations. The robot succeeds in balancing participation, particularly when using a highly intervening mediation strategy (directive strategy). However, children feel more heard by their peers when the robot intervenes less (organic strategy). We extend prior work on social robots that assist group work and contribute a mediator robot that enables children with visual impairments to engage equally in group conversations. We finish by discussing design implications for inclusive social robots.
Citations: 4
Perception-Intention-Action Cycle as a Human Acceptable Way for Improving Human-Robot Collaborative Tasks
IF 5.1 Q2 Computer Science Pub Date: 2023-03-13 DOI: 10.1145/3568294.3580149
J. E. Domínguez-Vidal, Nicolás Rodríguez, A. Sanfeliu
In Human-Robot Collaboration (HRC) tasks, the classical Perception-Action cycle cannot fully explain the collaborative behaviour of the human-robot pair until it is extended to a Perception-Intention-Action (PIA) cycle, which gives the human's intention a key role at the same level as the robot's perception rather than as a sub-block of it. Although part of the human's intention can be perceived or inferred by the other agent, this is prone to misunderstandings, so in some cases the true intention has to be explicitly communicated to fulfill the task. Here, we explore both types of intention and combine them with the robot's perception through the concept of Situation Awareness (SA). We validate the PIA cycle and its acceptance by the user with a preliminary experiment in an object transportation task, showing that its use can increase trust in the robot.
Citations: 0
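The two kinds of intention the abstract distinguishes — inferred by the robot versus explicitly communicated by the human — can be sketched as a small fusion step feeding the action choice. This is a toy illustration of the PIA idea under stated assumptions; the function names, the "blocked path" perception, and the action strings are all hypothetical.

```python
# Toy sketch of a Perception-Intention-Action step: explicitly informed
# intention overrides the inferred one (avoiding misunderstandings), and
# the action is chosen from perception plus the resolved intention.
# All names and values here are illustrative assumptions.
from typing import Optional

def resolve_intention(inferred: str, informed: Optional[str]) -> str:
    """Prefer an explicitly communicated intention over an inferred one."""
    return informed if informed is not None else inferred

def choose_action(perception: str, intention: str) -> str:
    """Toy policy: move toward the intended goal unless the path is blocked."""
    if perception == "path_blocked":
        return "wait"
    return f"move_to_{intention}"

# The human's explicit statement corrects a wrong inference.
goal = resolve_intention(inferred="shelf_a", informed="shelf_b")
print(choose_action("path_clear", goal))  # move_to_shelf_b
```

The point of the sketch is structural: intention enters the cycle as its own input to action selection, not as a by-product of perception.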
Making Music More Inclusive with Hospiano
IF 5.1 Q2 Computer Science Pub Date: 2023-03-13 DOI: 10.1145/3568294.3580184
Chacharin Lertyosbordin, Nichaput Khurukitwanit, Teeratas Asavareongchai, Sirin Liukasemsarn
Music brings people together; it is a universal language that can help us be more expressive and better understand our feelings and emotions. The "Hospiano" robot is a prototype developed with the goal of making music accessible to all, regardless of physical ability. The robot acts as a pianist and can be placed in hospital lobbies and wards, playing the piano in response to patients' gestures and facial expressions (i.e., head movement, eye and mouth movement, and proximity). It has three main modes of operation: "Robot Pianist mode", in which it plays pre-existing songs; "Play Along mode", which allows anyone to interact with the music; and "Composer mode", which allows patients to create their own music. The software that controls the prototype's actions runs on the Robot Operating System (ROS). It has been shown that humans and robots can interact fluently via a robot's vision, which opens up a wide range of possibilities for further interaction between these logical machines and more emotive beings like humans, resulting in improved quality of life for users, increased inclusivity, and a better world for future generations.
Citations: 0
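The three operating modes described above can be illustrated as a small mode dispatcher. The real system runs on ROS, but the mode logic itself needs nothing ROS-specific to sketch; the gesture names, note values, and function names below are illustrative assumptions, not the prototype's actual code.

```python
# Hypothetical sketch of Hospiano's three modes as a note dispatcher.
# The real system runs on ROS; gestures, notes, and names here are
# illustrative assumptions for demonstration only.
from enum import Enum

class Mode(Enum):
    ROBOT_PIANIST = "robot_pianist"  # plays pre-existing songs
    PLAY_ALONG = "play_along"        # anyone interacts with the music
    COMPOSER = "composer"            # patients build their own music

def select_note(mode: Mode, gesture: str, song: list[str]) -> str:
    """Map a detected gesture (e.g. head movement) to a note, per mode."""
    gesture_to_note = {"head_left": "C4", "head_right": "E4", "mouth_open": "G4"}
    if mode is Mode.ROBOT_PIANIST:
        # Pre-existing songs ignore gestures and advance note by note.
        return song.pop(0) if song else ""
    # Play Along and Composer modes are gesture-driven.
    return gesture_to_note.get(gesture, "")

print(select_note(Mode.PLAY_ALONG, "head_left", []))  # C4
```

In a ROS deployment, `select_note` would sit in a node subscribing to gesture-detection messages and publishing to the piano actuator, but that wiring is orthogonal to the mode logic shown here.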