
Latest Publications: ACM Transactions on Human-Robot Interaction

Towards Designing Companion Robots with the End in Mind
IF 5.1 Q2 ROBOTICS Pub Date : 2023-03-13 DOI: 10.1145/3568294.3580046
Waki Kamino
This paper presents an early-stage idea of using 'robot death' as an integral component of human-robot interaction design for companion robots. Reviewing previous discussions around the deaths of companion robots in real-life and popular culture contexts, and analyzing the lifelike design of current companion robots in the market, the paper explores the potential advantages of designing companion robots and human-robot interaction with their 'death' in mind.
Citations: 0
The Robot Made Us Hear Each Other: Fostering Inclusive Conversations among Mixed-Visual Ability Children
IF 5.1 Q2 ROBOTICS Pub Date : 2023-03-13 DOI: 10.1145/3568162.3576997
Isabel Neto, Filipa Correia, Filipa Rocha, Patricia Piedade, Ana Paiva, Hugo Nicolau
Inclusion is key in group work and collaborative learning. We developed a mediator robot to support and promote inclusion in group conversations, particularly in groups composed of children with and without visual impairment. We investigate the effect of two mediation strategies on group dynamics, inclusion, and perception of the robot. We conducted a within-subjects study with 78 children, 26 of whom had visual impairments, in a decision-making activity. Results indicate that the robot can foster inclusion in mixed-visual ability group conversations. The robot succeeds in balancing participation, particularly when using a highly intervening mediation strategy (directive strategy). However, children feel more heard by their peers when the robot is less intervening (organic strategy). We extend prior work on social robots that assist group work and contribute a mediator robot that enables children with visual impairments to engage equally in group conversations. We finish by discussing design implications for inclusive social robots.
Citations: 4
Designing a Robot which Touches the User's Head with Intra-Hug Gestures
IF 5.1 Q2 ROBOTICS Pub Date : 2023-03-13 DOI: 10.1145/3568294.3580096
Yuya Onishi, H. Sumioka, M. Shiomi
Hugging has many positive benefits, and several studies have applied it in human-robot interaction. However, due to limitations in robot performance, these robots could only touch the human's back. In this study, we developed a hug robot named "Moffuly-II." This robot can not only hug with intra-hug gestures but also touch the user's back or head. This paper describes the robot system and users' impressions of hugging the robot.
Citations: 0
Making Music More Inclusive with Hospiano
IF 5.1 Q2 ROBOTICS Pub Date : 2023-03-13 DOI: 10.1145/3568294.3580184
Chacharin Lertyosbordin, Nichaput Khurukitwanit, Teeratas Asavareongchai, Sirin Liukasemsarn
Music brings people together; it is a universal language that can help us be more expressive and understand our feelings and emotions better. The "Hospiano" robot is a prototype developed with the goal of making music accessible to all, regardless of physical ability. The robot acts as a pianist and can be placed in hospital lobbies and wards, playing the piano in response to the gestures and facial expressions of patients (e.g., head movement, eye and mouth movement, and proximity). It has three main modes of operation: "Robot Pianist mode", in which it plays pre-existing songs; "Play Along mode", which allows anyone to interact with the music; and "Composer mode", which allows patients to create their own music. The software that controls the prototype's actions runs on the Robot Operating System (ROS). Humans and robots can interact fluently via a robot's vision, which opens up a wide range of possibilities for further interaction between these logical machines and more emotive beings like humans, improving the quality of life of those who use it, increasing inclusivity, and making a better world for future generations.
Citations: 0
People Dynamically Update Trust When Interactively Teaching Robots
IF 5.1 Q2 ROBOTICS Pub Date : 2023-03-13 DOI: 10.1145/3568162.3576962
V. B. Chi, B. Malle
Human-robot trust research often measures people's trust in robots in individual scenarios. However, humans may update their trust dynamically as they continuously interact with a robot. In a well-powered study (n = 220), we investigate the trust-updating process across a 15-trial interaction. In a novel paradigm, participants act in the role of teacher to a simulated robot on a smartphone-based platform, and we assess trust at multiple levels (momentary trust feelings, perceptions of trustworthiness, and intended reliance). Results reveal that people are highly sensitive to the robot's learning progress trial by trial: they take into account previous-task performance, current-task difficulty, and cumulative learning across training. More integrative perceptions of robot trustworthiness steadily grow as people gather more evidence from observing robot performance, especially of faster-learning robots. Intended reliance on the robot in novel tasks increased only for faster-learning robots.
Citations: 5
A Persuasive Robot that Alleviates Endogenous Smartphone-related Interruption
IF 5.1 Q2 ROBOTICS Pub Date : 2023-03-13 DOI: 10.1145/3568294.3580097
Hanyang Hu, Mengyu Chen, Ruhan Wang, Yijie Guo
Endogenous smartphone interruptions affect many aspects of people's everyday lives, especially when studying or working under a desk lamp. To mitigate this, we built a robot that can persuade you intrinsically by augmenting the lamp on your desk with specific postures and light. This paper presents our design considerations and a first prototype, showing the possibility of alleviating people's endogenous interruptions through robots.
Citations: 0
Visuo-Textual Explanations of a Robot's Navigational Choices
IF 5.1 Q2 ROBOTICS Pub Date : 2023-03-13 DOI: 10.1145/3568294.3580141
Amar Halilovic, F. Lindner
With the rise in the number of robots in our daily lives, human-robot encounters will become more frequent. To improve human-robot interaction (HRI), people will require explanations of robots' actions, especially if they do something unexpected. Our focus is on robot navigation, where we explain why robots make specific navigational choices. Building on methods from the area of Explainable Artificial Intelligence (XAI), we employ a semantic map and techniques from the area of Qualitative Spatial Reasoning (QSR) to enrich visual explanations with knowledge-level spatial information. We outline how a robot can generate visual and textual explanations simultaneously and test our approach in simulation.
Citations: 1
Variable Autonomy for Human-Robot Teaming (VAT)
IF 5.1 Q2 ROBOTICS Pub Date : 2023-03-13 DOI: 10.1145/3568294.3579957
Manolis Chiou, S. Booth, Bruno Lacerda, Andreas Theodorou, S. Rothfuss
As robots are introduced to various domains and applications, Human-Robot Teaming (HRT) capabilities are essential. Such capabilities involve teaming with humans in or out of the loop at different levels of abstraction, leveraging the complementary capabilities of humans and robots. This requires robotic systems that can dynamically vary their level or degree of autonomy to collaborate with humans efficiently and overcome various challenging circumstances. Variable Autonomy (VA) is an umbrella term encompassing such research, including but not limited to shared control and shared autonomy, mixed-initiative, adjustable autonomy, and sliding autonomy. This workshop is driven by the timely need to bring together VA-related research and practices, which are often disconnected across different communities as the field is relatively young. The workshop's goal is to consolidate research in VA. To this end, and given the complexity and span of human-robot systems, this workshop will adopt a holistic trans-disciplinary approach aiming to a) identify and classify related common challenges and opportunities; b) identify the disciplines that need to come together to tackle the challenges; c) identify and define common terminology, approaches, methodologies, benchmarks, and metrics; and d) define short- and long-term research goals for the community. To achieve these objectives, this workshop aims to bring together industry stakeholders, researchers from fields under the banner of VA, and specialists from highly related fields such as human factors and psychology. The workshop will consist of a mix of invited talks, contributed papers, and an interactive discussion panel, working toward a shared vision for VA.
Citations: 0
Human-Drone Interaction: Interacting with People Smoking in Prohibited Areas
IF 5.1 Q2 ROBOTICS Pub Date : 2023-03-13 DOI: 10.1145/3568294.3580173
Yermakhan Kassym, Saparkhan Kassymbekov, Kamila Zhumakhanova, A. Sandygulova
Drones are continually entering our daily lives through a number of different applications. This creates a natural demand for better ways for humans and drones to interact. One possible application that would benefit from improved interaction is the inspection of smoking in prohibited areas. We propose our own drone flight gesture that we believe delivers the message "not to smoke" better than the ready-made built-in gesture. To this end, we conducted a within-subject experiment involving 19 participants, in which we evaluated the gestures on a drone operated through a Wizard-of-Oz interaction design. The results demonstrate that the proposed gesture was better at conveying the message than the built-in gesture.
Citations: 0
HighLight
IF 5.1 Q2 ROBOTICS Pub Date : 2023-03-13 DOI: 10.5040/9781350088733.0124
Alessandro Cabrio, Negin Hashmati, Philip Rabia, Liina Tumma, Hugo Wärnberg, Sjoerd Hendriks, Mohammad Obaid
Citations: 0