The Design of Informative Take-Over Requests for Semi-Autonomous Cyber-Physical Systems: Combining Spoken Language and Visual Icons in a Drone-Controller Setting
Ashwini Gundappa, Emilia Ellsiepen, Lukas Schmitz, Frederik Wiehr, Vera Demberg
{"title":"The Design of Informative Take-Over Requests for Semi-Autonomous Cyber-Physical Systems: Combining Spoken Language and Visual Icons in a Drone-Controller Setting","authors":"Ashwini Gundappa, Emilia Ellsiepen, Lukas Schmitz, Frederik Wiehr, Vera Demberg","doi":"arxiv-2409.08253","DOIUrl":null,"url":null,"abstract":"The question of how cyber-physical systems should interact with human\npartners that can take over control or exert oversight is becoming more\npressing, as these systems are deployed for an ever larger range of tasks.\nDrawing on the literatures on handing over control during semi-autonomous\ndriving and human-robot interaction, we propose a design of a take-over request\nthat combines an abstract pre-alert with an informative TOR: Relevant sensor\ninformation is highlighted on the controller's display, while a spoken message\nverbalizes the reason for the TOR. We conduct our study in the context of a\nsemi-autonomous drone control scenario as our testbed. The goal of our online\nstudy is to assess in more detail what form a language-based TOR should take.\nSpecifically, we compare a full sentence condition to shorter fragments, and\ntest whether the visual highlighting should be done synchronously or\nasynchronously with the speech. Participants showed a higher accuracy in\nchoosing the correct solution with our bi-modal TOR and felt that they were\nbetter able to recognize the critical situation. Using only fragments in the\nspoken message rather than full sentences did not lead to improved accuracy or\nfaster reactions. Also, synchronizing the visual highlighting with the spoken\nmessage did not result in better accuracy and response times were even\nincreased in this condition.","PeriodicalId":501031,"journal":{"name":"arXiv - CS - Robotics","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Robotics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.08253","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
The question of how cyber-physical systems should interact with human partners who can take over control or exert oversight is becoming more pressing as these systems are deployed for an ever larger range of tasks. Drawing on the literature on handing over control during semi-autonomous driving and on human-robot interaction, we propose a take-over request (TOR) design that combines an abstract pre-alert with an informative TOR: relevant sensor information is highlighted on the controller's display, while a spoken message verbalizes the reason for the TOR. We use a semi-autonomous drone control scenario as our testbed. The goal of our online study is to assess in more detail what form a language-based TOR should take. Specifically, we compare a full-sentence condition to shorter fragments, and test whether the visual highlighting should occur synchronously or asynchronously with the speech. Participants showed higher accuracy in choosing the correct solution with our bi-modal TOR and felt that they were better able to recognize the critical situation. Using only fragments in the spoken message rather than full sentences did not lead to improved accuracy or faster reactions. Synchronizing the visual highlighting with the spoken message also did not result in better accuracy, and response times even increased in this condition.
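To make the two-stage interaction concrete, below is a minimal sketch (not from the paper) of how a controller might sequence an abstract pre-alert followed by a bi-modal informative TOR, with the visual highlight either started together with the speech or shown before it. The helpers sound_pre_alert, play_speech, and highlight_sensor are hypothetical placeholders for whatever audio/display calls the actual controller software provides.

```python
import threading
import time

# Hypothetical UI/audio hooks; a real controller would supply these.
def sound_pre_alert():
    print("[audio] abstract pre-alert tone")

def play_speech(message: str):
    print(f"[audio] speaking: {message!r}")
    time.sleep(2.0)  # stand-in for the speech duration

def highlight_sensor(sensor_id: str):
    print(f"[display] highlighting sensor {sensor_id}")

def issue_tor(reason: str, sensor_id: str, synchronous: bool = True):
    """Two-stage TOR: abstract pre-alert, then an informative bi-modal message.

    synchronous=True starts the visual highlight together with the speech;
    synchronous=False shows the highlight first and then plays the speech.
    (The study found no accuracy benefit for synchronization, and response
    times even increased in the synchronous condition.)
    """
    sound_pre_alert()
    time.sleep(1.0)  # brief gap before the informative stage

    if synchronous:
        # Run speech in parallel so the highlight appears as the message plays.
        speech = threading.Thread(target=play_speech, args=(reason,))
        speech.start()
        highlight_sensor(sensor_id)
        speech.join()
    else:
        highlight_sensor(sensor_id)
        play_speech(reason)

if __name__ == "__main__":
    issue_tor("Battery level critical, landing required.", "battery",
              synchronous=False)
```

This is only a sketch under the stated assumptions; in the study itself, synchronization meant timing individual highlights to the moments the corresponding information was mentioned in the spoken message, which a production implementation would coordinate with the speech synthesizer's timing data.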