Shared control is an exciting, up-and-coming engineering field that blurs the traditional boundaries of control: humans interact with robots or vehicles that are partly automated. The main challenges for robotics and automation today are posed by the less structured, unpredictable environments in which humans naturally operate, especially when humans must interact with one another. As a result, many semi-automated systems still need to be supervised by human operators, and as the level of automation of many systems increases, our thinking about what this implies for human-machine interaction must keep pace. The widely applied paradigm of human-centered automation has been popular and useful for the past two decades, but it needs updating now that automation is becoming more ubiquitous in human environments and less dependent on explicit input from the human operator. In contrast to the supervisory control paradigm, in which control is traded between human and machine, the shared control paradigm implicitly assumes an interaction between two or more independent agents that together perform a task to achieve a common goal. This implies that the design of a shared control system is not necessarily only human centered. In shared control systems, all the acting agents need to be aware of the others' capabilities, weaknesses, and authority. Hence, reciprocal communication of each agent's operational boundaries, whether human or machine, is essential.
M. Mulder, D. Abbink, and T. Carlson, "Introduction to the special issue on shared control," Journal of Human-Robot Interaction. DOI: 10.5898/JHRI.4.3.Mulder. Published 2015-12-06.
Pub Date: 2015-12-06. DOI: 10.5898/JHRI.4.3.Rakhsha
Ramtin Rakhsha, D. Constantinescu
Proportional-derivative (PD) control is often used to coordinate the two copies of the virtual environment in distributed two-user networked haptic cooperation. However, a distributed PD controller designed for force interactions between two users may destabilize haptic cooperation among multiple users, because the effective coordination gain for each local copy of the virtual environment increases with the participant count. This paper proposes the average position (AP) strategy to upper-bound the effective stiffness of the shared virtual object (SVO) coordination and, thus, to increase the stability of distributed multi-user haptic cooperation. The paper first motivates the AP strategy via a continuous-time analysis of the autonomous dynamics of an SVO distributed among N users connected across a network with infinite bandwidth and no communication delay. We then investigate the effect of AP coordination on distributed multi-user haptic interactions over a network with limited bandwidth and constant, small communication delay via multi-rate stability and performance analyses of cooperative manipulations of an SVO by up to five operators. The paper shows that AP coordination: (1) has bounded effective coordination gain; (2) increases the stability region of distributed multi-user haptic cooperation compared to conventional PD coordination; and (3) renders less viscous SVO dynamics to operators than PD coordination. Three-user experimental manipulations of a shared virtual cube validate the analysis.
Ramtin Rakhsha and D. Constantinescu, "Average-position coordination for distributed multi-user networked haptic cooperation," Journal of Human-Robot Interaction. DOI: 10.5898/JHRI.4.3.Rakhsha. Published 2015-12-06.
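The scaling argument in the abstract above can be illustrated with a toy calculation (a sketch with hypothetical gains, not the paper's controller): under pairwise PD coupling, each local copy of the shared virtual object is sprung to every one of the other N-1 copies, so the effective coordination stiffness grows with the number of users, while coupling each copy to the average position keeps it bounded.

```python
# Toy scaling comparison (hypothetical gains, not the paper's controller):
# pairwise PD coupling springs each local SVO copy to every other copy, so
# effective stiffness grows with N; average-position (AP) coupling uses one
# spring to the average of the other copies, so it stays bounded.

def pd_effective_stiffness(kp: float, n_users: int) -> float:
    """Pairwise PD coupling: stiffnesses to the other N-1 copies add up."""
    return kp * (n_users - 1)


def ap_effective_stiffness(kp: float, n_users: int) -> float:
    """AP coupling: one spring to the average of the others, regardless of N."""
    return kp


if __name__ == "__main__":
    KP = 100.0  # hypothetical coordination gain, N/m
    for n in (2, 3, 5):
        print(f"N={n}: PD {pd_effective_stiffness(KP, n):6.1f} N/m, "
              f"AP {ap_effective_stiffness(KP, n):6.1f} N/m")
```

For two users the strategies coincide, which is consistent with the abstract's observation that the instability only emerges as the participant count grows.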
Pub Date: 2015-09-01. DOI: 10.5898/JHRI.4.2.Reveleau
Aurélien Reveleau, François Ferland, M. Labbé, D. Létourneau, F. Michaud
Commercial telepresence robots provide video, audio, and proximity data to remote operators through a teleoperation user interface running on standard computing devices. As new modalities such as force sensing and sound localization are being developed and tested on advanced robotic platforms, ways to integrate such information into a teleoperation interface are required. This paper demonstrates the use of visual representations of forces and sound localization in a 3D teleoperation interface. Forces are represented using colors, sizes, bar graphs, and arrows, while speech or ring bubbles are used to represent sound positions and types. Validation of these modalities was conducted with 31 participants using IRL-1/TR, a humanoid platform equipped with differential elastic actuators to provide compliance and force control of its arms, and capable of sound source localization. Results suggest that visual representations of interaction force and sound source can provide useful information to remote operators.
Aurélien Reveleau, François Ferland, M. Labbé, D. Létourneau, and F. Michaud, "Visual representation of interaction force and sound source in a teleoperation user interface for a mobile robot," Journal of Human-Robot Interaction. DOI: 10.5898/JHRI.4.2.Reveleau. Published 2015-09-01.
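One simple way to realize the kind of force visualization described above might look like the following sketch (the mapping, pixel range, and color scheme are invented for illustration; the actual IRL-1/TR interface renders forces in a 3D view):

```python
# Hypothetical sketch: map an interaction force magnitude to a colored arrow,
# in the spirit of the visual force representation the abstract describes.
# Magnitude drives both arrow length and a green-to-red color ramp.

def force_to_arrow(force_n: float, max_force_n: float = 20.0):
    """Return (length_px, (r, g, b)) for a force magnitude in newtons,
    clamped at max_force_n."""
    ratio = min(abs(force_n) / max_force_n, 1.0)
    length_px = 10 + 90 * ratio                        # 10..100 px
    color = (int(255 * ratio), int(255 * (1 - ratio)), 0)  # green -> red
    return length_px, color
```

Clamping at a maximum force keeps the arrow readable when contact forces spike, a common design choice in force-display UIs.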
Pub Date: 2015-09-01. DOI: 10.5898/JHRI.4.2.Dominguez
Cynthia O. Dominguez, Robert V Strouse, E. Papautsky, Brian Moon
This paper reports on a research project that combined cognitive task analysis (CTA) methods with innovative design processes to develop a handheld device application enabling a non-aviator to interact with a highly autonomous resupply helicopter. In recent military operations, unmanned helicopters have been used to resupply U.S. Marines at remote forward operating bases (FOBs) and combat outposts (COPs). This use of unmanned systems saves lives by eliminating the need to drive through high-risk areas for routine resupply. The U.S. Navy is investing in research to improve the autonomy of these systems and the design of interfaces to enable a non-aviator Marine to safely and successfully interact with an incoming resupply helicopter using a simple, intuitive handheld device application. In this research, we collected data from multiple stakeholders to develop requirements, use cases, and design storyboards that have been implemented and demonstrated during flight tests in early 2014.
Cynthia O. Dominguez, Robert V Strouse, E. Papautsky, and Brian Moon, "Cognitive design of an application enabling remote bases to receive unmanned helicopter resupply," Journal of Human-Robot Interaction. DOI: 10.5898/JHRI.4.2.Dominguez. Published 2015-09-01.
The search for invariants is a fundamental aim of scientific endeavor. These invariants, such as Newton's laws of motion, allow us to model and predict the behavior of systems across many different problems. In the nascent field of Human-Swarm Interaction (HSI), a systematic identification of fundamental invariants is still lacking. Discovering and formalizing these invariants will provide a foundation for developing, and better understanding, effective methods for HSI. We propose two invariants underlying HSI for geometric-based swarms: (1) collective state is the fundamental percept associated with a bio-inspired swarm, and (2) a human's ability to influence and understand the collective state of a swarm is determined by the balance between span and persistence. We provide evidence for these invariants by synthesizing much of our previous work in the area of HSI with several new results, including a novel user study in which users manage multiple swarms simultaneously. We also discuss how these invariants can be applied to enable more efficient and successful teaming between humans and bio-inspired collectives, and we identify several promising directions for future research into the invariants of HSI.
Daniel S. Brown, M. Goodrich, S. Jung, and S. Kerman, "Two invariants of human-swarm interaction," Journal of Human-Robot Interaction. DOI: 10.5898/JHRI.5.1.Brown. Published 2015-08-08.
Daniel J. Brooks, K. Tsui, M. Lunderville, H. Yanco
A significant amount of research has been conducted on the technical aspects of haptic feedback. However, the design of effective haptic feedback behaviors for controlling ground-based mobile robots is not yet well understood from a human-robot interaction perspective. Past research on haptic feedback behaviors for mobile robots has sometimes used control paradigms that do not map appropriately to teleoperation or supervision tasks. Furthermore, evaluation of haptic behaviors has not been systematic and often only demonstrates feasibility. As a result, comparing various techniques is difficult. In this article, we focus on how haptic control research could be improved in the domain of teleoperation and supervision of ground-based mobile robots through the introduction of a haptic evaluation toolkit.
Daniel J. Brooks, K. Tsui, M. Lunderville, and H. Yanco, "Methods for evaluating and comparing the use of haptic feedback in human-robot interaction with ground-based mobile robots," Journal of Human-Robot Interaction. DOI: 10.5898/JHRI.4.1.Brooks. Published 2015-07-22.
Tina Setter, A. Fouraker, M. Egerstedt, H. Kawashima
This paper investigates how haptic interactions can be defined to enable a single operator to control and interact with a team of mobile robots. Since there is no unique or canonical mapping from the swarm configuration to the forces experienced by the operator, a suitable mapping must be developed. To this end, multi-agent manipulability is proposed as a potentially useful mapping, whereby the forces experienced by the operator relate to how inputs, injected at precise locations in the team, translate to swarm-level motions. Small forces correspond to directions in which it is easy to move the swarm, while larger forces correspond to more costly directions. Initial experimental results support the viability of the proposed haptic human-swarm interaction mapping through a user study in which operators are tasked with driving a collection of robots through a series of waypoints.
Tina Setter, A. Fouraker, M. Egerstedt, and H. Kawashima, "Haptic interactions with multi-robot swarms using manipulability," Journal of Human-Robot Interaction. DOI: 10.5898/JHRI.4.1.Setter. Published 2015-07-22.
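As a rough illustration of the manipulability idea in the abstract above, the sketch below (the map M and gain k are hypothetical, not taken from the paper, which derives its mapping from the multi-agent dynamics) renders larger forces along directions in which a linear input-to-swarm-motion map is weaker:

```python
# Illustrative sketch of manipulability-style force rendering. M maps an
# input injected at a leader robot to swarm-level velocity; the operator
# feels force ~ k * (M M^T)^{-1} v_desired, so directions with large
# singular values of M (easy to move the swarm) produce small forces and
# weak directions produce large ones. All values here are hypothetical.
import numpy as np


def haptic_force(M: np.ndarray, v_desired: np.ndarray, k: float = 1.0) -> np.ndarray:
    """Force rendered to the operator for a desired swarm velocity."""
    return k * np.linalg.solve(M @ M.T, v_desired)


M = np.diag([2.0, 0.5])  # hypothetical: easy to move along x, hard along y
f_easy = haptic_force(M, np.array([1.0, 0.0]))  # small force along x
f_hard = haptic_force(M, np.array([0.0, 1.0]))  # large force along y
```

The inverse of M Mᵀ is one common way to turn a manipulability ellipsoid into a resistance-like quantity; the paper's actual mapping may differ in detail.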
Pub Date: 2015-07-22. DOI: 10.5898/JHRI.4.1.MacLean
Karon E. MacLean, A. Frisoli
Coming generations of robots will share physical space with humans, engaging in contact interactions (physical human-robot interaction, or pHRI) as they carry out cooperative tasks. This special issue turns a spotlight on the specific roles that crafted haptic interaction can play in cooperation and communication between a human and a robotic partner, from the viewpoints of human needs, capabilities, and expectations, and of engineering implementation.
Karon E. MacLean and A. Frisoli, "Introduction to journal of human-robot interaction," Journal of Human-Robot Interaction. DOI: 10.5898/JHRI.4.1.MacLean. Published 2015-07-22.
Warfighter safety can be significantly increased by offloading critical reconnaissance and surveillance missions to robotic assets. The subtleties of these tasks require significant operator involvement, usually carried out close to the robot's deployment site. Human soldiers use gestures to communicate movements and commands when engaged in this type of task. While considerable work has been done with robots visually observing humans to interpret their gestures, we propose a simpler, more field-appropriate system that allows robot operators to use their natural movements and gestures (via inertial measurement units [IMUs]) to teleoperate a robot while reducing the physical, as well as the cognitive, load on the soldier. This paper describes an operator control interface implemented on a smartphone, in contrast to the proprietary robot controllers typically used. The controller utilizes the device's IMUs, or attitude sensors, to bypass the touchscreen while accepting user input via gestures; this addresses a primary concern for gloved users in dirty environments, where touchscreens lack reliability. We also propose that it provides a less visually intense alternative for control, freeing the soldier's cognitive resources for other functions. We present details of the attitude-based control software, as well as the design heuristics resulting from its iterative build-test-rebuild development. Additionally, results from a set of user studies show that, as a controller, this technique performs as well as, or better than, other screen-based control systems, even when ignoring its advantages for gloved users. Twenty-five users were recruited to assess the usability of these attitude-aware controls, testing their suitability for both driving and camera manipulation tasks. Participants drove a small tracked robot on an indoor course using the attitude-aware controller and a virtual (touchscreen) joystick, while metrics regarding performance, mental workload, and user satisfaction were collected. Results indicate that the tilt controller is preferred by 64% of users and performs as well as, if not better than, the alternative on most performance metrics. These results support the development of a smartphone-based control option for military robotics, with a focus on more physical, attitude-based input methods that overcome deficiencies of current touch-based systems, namely lack of physical feedback, high attention demands, and unreliability in field environments.
A. Walker, David P. Miller, and Chen Ling, "User-centered design of an attitude-aware controller for ground reconnaissance robots," Journal of Human-Robot Interaction. DOI: 10.5898/JHRI.4.1.Walker. Published 2015-07-22.
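A minimal sketch of the attitude-to-command mapping such a tilt controller might use (the angles, deadband, and gains below are hypothetical; the paper's actual control laws are not reproduced here):

```python
# Hypothetical tilt-to-drive sketch: device pitch drives forward speed,
# roll drives turn rate, with a deadband so small hand tremors do not move
# the robot. Angles in degrees; gains and thresholds invented for
# illustration.

def tilt_to_drive(pitch_deg: float, roll_deg: float,
                  deadband_deg: float = 5.0, gain: float = 0.02):
    """Map device attitude to (linear, angular) velocity commands."""
    def shaped(angle: float) -> float:
        if abs(angle) < deadband_deg:
            return 0.0
        # re-zero at the deadband edge so the command ramps up smoothly
        return gain * (angle - deadband_deg * (1 if angle > 0 else -1))
    return shaped(pitch_deg), shaped(roll_deg)
```

Re-zeroing at the deadband edge avoids a command jump when the tilt first crosses the threshold, which matters for smooth teleoperation.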
Pub Date: 2015-05-09. DOI: 10.5898/JHRI.4.2.Williams
T. Williams, Priscilla Briggs, Matthias Scheutz
As future human-robot teams are envisioned for a variety of application domains, researchers have begun to investigate how humans and robots can communicate effectively and naturally in the context of human-robot team tasks. While a growing body of work is focused on human-robot communication and human perceptions thereof, there is currently little work on human perceptions of robot-robot communication. Understanding how robots should communicate information to each other in the presence of human teammates is an important open question for human-robot teaming. In this paper, we present two human-robot interaction (HRI) experiments investigating the human perception of verbal and silent robot-robot communication as part of a human-robot team task. The results suggest that silent communication of task-dependent, human-understandable information among robots is perceived as creepy by cooperative, co-located human teammates. Hence, we propose that, absent specific evidence to the contrary, robots in cooperative human-robot team settings need to be sensitive to human expectations about overt communication, and we encourage future work to investigate possible ways to modulate such expectations.
T. Williams, Priscilla Briggs, and Matthias Scheutz, "Covert robot-robot communication," Journal of Human-Robot Interaction. DOI: 10.5898/JHRI.4.2.Williams. Published 2015-05-09.